How to actually compute the dual and confusions about wedge product in PGA?

To start with, my background is in engineering and software dev. I have finished and grasped most of this series on GA, which deals with R_{2,0,0} and R_{3,0,0}. So I get that if I define a vector, bivector, etc., I can take a geometric product between them, analytically on paper as well as numerically given actual numbers.

Now I have watched the SIGGRAPH course on PGA and have the notes. Conceptually, I sort of get how each point in R_{(2,0,1)} has a dual line that passes through the projective plane at its origin, and each line corresponds to a plane that passes through the origin.

I get that in this geometry I should consider that a vector L = a e1 + b e2 + c e0 represents a line, and a bivector A = p1 e_{12} + p2 e_{10} + p3 e_{20} represents a point (I am not entirely sure why, but let’s go with it).

  1. How do I actually compute the dual of one of these? That is, how do I compute J(L) or J(A)?
    In R_{3,0,0} I know that a geometric product with I gives you the dual of any element in the space. Is that how it is done in PGA? For example, if I have a line defined as e0 + 2e1 + 3e2, what is its dual? Update: I have looked at code generated by bivector.net. I see that Dual is defined as:

    def Dual(a):
        """PGA2D.Dual

        Poincare duality operator.
        """
        res = a.mvec.copy()
        res[0] = a[7]
        res[1] = a[6]
        res[2] = a[5]
        res[3] = a[4]
        res[4] = a[3]
        res[5] = a[2]
        res[6] = a[1]
        res[7] = a[0]
        return PGA2D.fromarray(res)
    

So it would seem like the dual of any multivector is obtained by reversing the order of its coefficients. Why is it so?

  2. The course notes go on to say (Figure 6 of the notes) that the wedge product of three vectors is a point. My entire understanding until now was that the wedge product of two vectors formed an area and three vectors formed a volume. How did this change in PGA?

I have found many abstract explanations (e.g. in 5.10 of the course notes), most of which went right over my head. I couldn’t find a solid explanation for this with an actual example of the operation on a point/line. Does something like this exist?


I think you will find specific answers to your questions in PGA4CS (also on bivector.net) though that treats these issues in 3D, not 2D. It is meant to be concrete.

Remember PGA is a model of Euclidean geometry: it lifts that geometry to a more abstract space whose GA we use in a clever way. R(2,0,0) and R(3,0,0) are not such models. They give you a sensible intuition and confidence about how GAs work in general, structurally. They are also practical, in the sense that they give the geometry of 2D/3D directions, and those occur locally everywhere in other spaces (where tangent spaces are flat Euclidean space with a distinguished origin). But you should try not to copy the semantics of one model (directions in space) to try to understand the semantics of another model (Euclidean rigid body motion).

Monitor bivector.net. Fairly soon there will be a video by Steven De Keninck (item 0 in the GAME2020 series) about PGA; and I have recorded an invited talk for CGI/ENGAGE which we will post in October. Both talks will show how PGA becomes more natural and acceptable if you start from the representation of motions, and only after that decide how to represent your objects so that they ‘move nicely’ (covariantly).

Leo


As I understand it, in Geometric Algebra (GA) the dual of a blade is the rest of the space, i.e. what you have to multiply a blade by to get to the top blade (the pseudoscalar). Thus in 3D, meaning G(3,0,0), the dual of e12 is e3. The dual of a blade is usually obtained by post-multiplying by the pseudoscalar, but in PGA, meaning G(n,0,1), this doesn’t work because the pseudoscalar is (for n=3) e0123, and the e0 in it (the null direction) will multiply with any blade containing another 0-index to produce 0. Although this seems problematic, it doesn’t really matter: you don’t need the pseudoscalar multiplication to define the dual, which is perfectly well defined as being the rest of the space. So the reason the coefficients reverse order (in the code by enki that you quote) is just a consequence of listing the terms in the order {1, e0, e1, e2, e01, e02, e12, e012}: you see that the dual of blade i here is blade 7-i, for i = 0 to 7.
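To make that concrete, here is a minimal numeric sketch applied to the e0 + 2e1 + 3e2 line from the question. The basis names follow the order above; sign conventions, on which implementations differ, are deliberately ignored:

```python
# 2D PGA basis blades, in the order used by the bivector.net generated code.
BASIS = ["1", "e0", "e1", "e2", "e01", "e02", "e12", "e012"]

def dual_coefficients(coeffs):
    """Poincare dual: the coefficient of blade i moves to slot 7 - i."""
    return coeffs[::-1]

# The line e0 + 2e1 + 3e2 from the question:
line = [0.0, 1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0]
point = dual_coefficients(line)
nonzero = [(BASIS[i], c) for i, c in enumerate(point) if c != 0.0]
print(nonzero)  # [('e01', 3.0), ('e02', 2.0), ('e12', 1.0)] -- a 2-blade, i.e. a point
```

Note that applying the reversal twice gives back the original multivector, so this dual is an involution.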

Your second question goes to the heart of the PGA innovation (and controversy). Hopefully complementary to what Leo has said above, I would add that yes, at first it is unpleasant to find the wedge product reducing the dimension of geometrical objects, as you say. But nothing actually changed from regular GAs: the wedge still increases the grade of the blade. The strangeness you point out arises because, in PGA, Euclidean objects reduce in dimension as the grade of the blades representing them increases.

Personally, I found it quite enlightening to slog through the long debate here (so long I could not get Firefox to print it):

Much of it is concerned with Eric Lengyel’s claims here:

http://terathon.com/blog/projective-geometric-algebra-done-right/

Like you, Lengyel was perturbed by the use of the “dual algebra” for Euclidean objects in PGA, and he goes on to dualise the operators instead of the objects. This produces all the same computations. I was satisfied by their debate and concluded that there are two equivalent ways to talk about it. Whichever you prefer (and I do prefer Gunn’s, although it is not yet available as a wall poster on Amazon like Lengyel’s), if you understand what they debate about above, you will understand much more about your question. It is already weird enough to be in projective spaces, so hey, why not reverse the dimensional significance of grade? It recasts all previous projective geometry in an interesting new light. My view is that this is deep magic. In practical terms, it works, and it is, remarkably, a relatively new and powerful advance in Euclidean Geometry. Such things appear, in my calculation, about once every 1000 years.

It would be nice to see this all fleshed out properly in a tutorial paper, as I agree with you that the “Course Notes” can be a bit technical in places for the novice. I found Chris Doran’s remarks on the issues you touch on to be also instructive:

http://geometry.mrao.cam.ac.uk/2020/06/euclidean-geometry-and-geometric-algebra/

And I think this forum can help quite a bit in this respect. Certainly it has helped me. - Tony


@bellinterlab. I’m not exactly clear what you mean when you say:

" in PGA, Euclidean objects reduce in dimension as the grade of the blades representing them increases"

It does indeed appear at first sight that in the dual construction (i.e., the 1-vectors represent planes) increasing the grade decreases the dimension of the associated geometry. However, it’s my belief that a closer look at what we mean by “dimension” shows that the dimension always increases with grade.

In PGA whatever primitive is represented by 1-vectors has dimension 0, and the higher grades are built by wedging these 0-dimensional objects together. This is familiar to us when the 1-vectors are points: the wedge is the join operation, wedging two 1-vectors gives a 1D line, and wedging three gives a 2D plane. This plane, as a 3-vector, is a composite object and can be thought of as being made up of all the points that lie in the plane.

When planes are 1-vectors, the higher grades are also built up by wedging 1-vectors, but now the wedge is meet and planes are considered indivisible, i.e., 0-dimensional. The wedge of 2 planes is their intersection line (1D), and the wedge of 3 planes is their intersection point (2D). This point, as a 3-vector, is a composite object and can be considered as made up of all planes that pass through it.
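The same story one dimension down, in the 2D PGA of the original question, can be checked numerically: there the 1-vectors are lines, and the wedge of two line 1-vectors is their intersection point. This sketch assumes the coordinate convention that the line ax + by + c = 0 is a e1 + b e2 + c e0 and the point (x, y) is x e20 + y e01 + e12:

```python
def meet_lines(l1, l2):
    """Wedge of two 2D PGA lines (a, b, c) ~ a*e1 + b*e2 + c*e0, i.e. the
    line ax + by + c = 0.  Returns the intersection point as coefficients
    of (e12, e01, e02), expanded from the outer product by hand."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    e12 = a1 * b2 - b1 * a2
    e01 = c1 * a2 - a1 * c2
    e02 = c1 * b2 - b1 * c2
    return e12, e01, e02

def to_euclidean(p):
    """Dehomogenize: the point (x, y) corresponds to x*e20 + y*e01 + e12."""
    e12, e01, e02 = p
    return -e02 / e12, e01 / e12

# The lines x = 1 and y = 2 intersect at (1, 2):
p = meet_lines((1, 0, -1), (0, 1, -2))
print(to_euclidean(p))  # (1.0, 2.0)
```

If the two lines are parallel, the e12 coefficient comes out zero and the result is an ideal point (a direction), which is exactly the projective behavior one wants.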

I’ll stop here and refer you to a previous post here that includes more details on this question.

Please post again if anything is still puzzling.

Well, I’ve been through it all again (hence the delay) and you are right: what we think of as n-1 dimensional Euclidean hyperplanes are the grade-1 primitives of the Geometric Algebra G(n,0,1), and since that’s a 1-up projective algebra, they are actually 0-dimensional geometric objects. Further, since G(n,0,1) is the minimum algebra that encodes them, it is justified to claim that such planes ARE 0-dimensional, and that points ARE n-1 dimensional. But in this, we need to decouple the notion of how many numbers it takes to specify a point within a linear subspace (our Linear Algebra definition of a subspace’s dimension…), from the notion of the grade of the Euclideanly-transforming Geometric Algebra blade that represents that subspace within the GA. Normally in GA’s these two numbers go up together, but in the PGA case they run oppositely: that’s what I meant in the statement you quoted.

Still, it’s a bit of a Copernican operation on our minds. How is it the case that maths gives such a different answer on dimension and primitivity to the one our minds conceive? It must be because there are these two different definitions of dimension at play. In normal linear algebra, a point has no substructure but a plane does because I cannot project onto linear subspaces from within a point. But in a Euclidean PGA, a plane has no substructure because it is not the wedge of anything. Would it be helpful to decouple the notions of dimensionality and primitivity?

I was intrigued by Eric Lengyel’s point: since meet and join are in both the standard algebra and its dual, why not also both dot and antidot, and antireversion, and thus the antiproduct too; and indeed it may be convenient, notationally, educationally, and otherwise, to have them all. This enables Euclidean geometric calculations with points as vectors and so on, and seems to confer equal status on the standard algebra P(R(3,0,1)). But properly speaking G(3,0,1) is generated by its own geometric product alone, not by its anti-product, and that product, when used to exponentiate bivectors, moves vectors around as if they are Euclidean planes, and trivectors around as if they are Euclidean points, exactly as you say. Since in a GA there is only the grade of the object and the space it inhabits to indicate its dimension, and since the Euclidean object encoded by a blade in an n-dimensional Euclidean PGA can be said to have dimension n-1, it follows that, according to GAs, the planes in the space we live in are 0-dimensional. Furthermore they are primitives, because the constructive operation in a GA is the wedge, and these planes are the base vectors. It follows, in a further mind-bend, that the primitives of G(3,1,1), what you might call the projective Minkowski/space-time algebra, are 0-dimensional volumes. Interestingly there we have a set of 10 bivectors (dual to 10 trivectors…) available to exponentiate and move things around STA-style. I project that Lorentz will transform in his grave.

In the above version of your argument, which I wrote to help myself understand (sorry if it makes things worse for anyone else!), I focused on the geometric product and stayed away from the issue of when and whether meet or join is the constructive operation, since that confusingly shifts around in the way you indicate. I found your 2011 paper to be very helpful in understanding your perspective.

But still, we are left grasping for further intuition about why all this is the case. In your
GAME talk at 59:20, you say “the dual metric is less degenerate”, when talking about the issue of the differences between the standard P(R(3,0,1)) and dual P(R*(3,0,1)) algebras. What does this mean? I see what it means to projectivise the exterior algebra of a vector space or its dual, but I still don’t really see in my mind why the isometries, or angle and distance preserving transformations, arise in the latter rather than the former. Maybe the answer is “they just do”, but I know you have a more precise notion of the form of the asymmetry at play. In what exact sense is the dual metric more degenerate? And what is it about this extra degeneracy of the dual metric that makes the exponentiated bivectors perform correct Euclidean transformations on the dual objects of G(n,0,1) but not on the standard ones? It must have something to do with the different rotational symmetry groups that points, lines and planes have, and how this meshes with the two algebras. It also must relate to your statement at 59:35 that the motion group (of exponentiated bivectors) is generated by reflections in planes, not points.

NOTE ADDED: I did find some helpful material on my last paragraph of questions, in sections 3.6.1 to 3.7.1 of this paper. I didn’t quite see the answer to my questions though. By the way, just to make sure I am correct: I know that if I want to implement P(R*(3,0,1)) in G(3,0,1) I transform trivectors, using exponentiated bivectors, while imagining them to be points (etc). But if I want to implement P(R(3,0,1)) as a GA, I use exactly the same algebra G(3,0,1) but I just imagine that the vectors are points, and so on, instead? I.e., from the GA point of view there is only one algebra, and the two schemes arise from the interpretations of the blades.

(My apologies for writing at such length. It just seems these are very important things to understand. By the way, I’m still thinking about physics and the role of the weights and I will get back on this topic at some point.)


@bellinterlab I’m glad you’ve taken such pains to describe your thoughts and feelings on this point. I expect that during the time of Copernicus there were also such discussions; it was only after a long process of assimilation that the new ideas took root and began to appear “self-evident”.

I want to clear up one point you raised regarding my remarks in the video at about 59:00 regarding the “more” and “less” degenerate metric. I did not express myself as clearly as I could have. Here’s what I meant to say:

In the (3,0,1) signature, three basis 1-vectors square to 1 and one squares to 0 (that’s what the signature means). There is an induced signature on the 3-vectors; you can calculate it yourself: notice that three of the four basis trivectors have e_0 as a factor, hence each squares to 0, and only e_1 e_2 e_3 has a non-zero square (-1). So the metric on the grade-3 vectors is (0,1,3). The latter is “more degenerate” than (3,0,1) since it has only 1 non-zero square.
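This can be verified with a few lines of code; the only fact the sketch assumes is that a basis blade squares to its reversal sign times the product of its generators' metric factors:

```python
from itertools import combinations

# Metric for the PGA signature (3,0,1): e0^2 = 0, e1^2 = e2^2 = e3^2 = 1.
METRIC = {0: 0, 1: 1, 2: 1, 3: 1}

def blade_square(indices):
    """Square of a basis blade e_{i1 i2 ...} (strictly increasing indices).
    Reversing a grade-k blade costs (-1)^(k(k-1)/2); each generator then
    contracts with itself, contributing its metric factor."""
    k = len(indices)
    square = (-1) ** (k * (k - 1) // 2)
    for i in indices:
        square *= METRIC[i]
    return square

for tri in combinations(range(4), 3):
    print("e" + "".join(map(str, tri)), "squares to", blade_square(tri))
# e012, e013, e023 square to 0; e123 squares to -1  ->  signature (0,1,3)
```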

Now you can apply this to the situation at hand. You know by considering euclidean geometry that in order to measure the angle between planes you need the metric (3,0,1); that’s discussed in several of my papers, and the main argument is that the angle between two planes depends on the first 3 coordinates (the “normal” vector) but not on the fourth. By the previous paragraph, the only way to get this metric on the planes is to take the planes to be the 1-vectors, that is, use the dual construction for PGA.
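A sketch of that argument in code, under an assumed plane convention (a, b, c, d) for ax + by + cz + d = 0: the (3,0,1) inner product of two plane 1-vectors simply ignores the degenerate e0 coordinate, and that is exactly the Euclidean angle formula:

```python
import math

def plane_angle(p, q):
    """Angle between planes (a, b, c, d) ~ a e1 + b e2 + c e3 + d e0.
    Only the normal (first three) coordinates enter, which is exactly
    the (3,0,1) inner product on 1-vectors: e0's factor is zero."""
    dot = sum(pi * qi for pi, qi in zip(p[:3], q[:3]))
    na = math.sqrt(sum(x * x for x in p[:3]))
    nb = math.sqrt(sum(x * x for x in q[:3]))
    return math.acos(dot / (na * nb))

# x = 0 and y = 0 meet at a right angle, regardless of their d offsets:
print(plane_angle((1, 0, 0, 5), (0, 1, 0, -2)))  # 1.5707963...
```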

I hope that clarifies what I was trying to say in the video.

Real soon now I hope to fulfill my promise (made on another thread here) to describe the space you get when you use the standard construction, that is, points are 1-vectors. This I hope will help you and others to get the full picture and begin to see the beautiful symmetry that is present here. The standard construction for (3,0,1) gives rise to a completely different metric space (called “dual euclidean space” or “counter-space”) that has many interesting features in its own right and should not be thought of as some kind of second-rate representation of euclidean space.


You can indeed do everything while keeping vectors = points and trivectors = planes, but using the antiproduct. Mathematically, the algebras are identical. However, there’s more to it. You don’t want to just choose one product or its dual and throw the other one away. There are good reasons to recognize both of them simultaneously, as I’ve described here:

http://terathon.com/blog/symmetries-in-projective-geometric-algebra/


That’s a really great blog post. You have new formulae for duals, you explain the duality of the two norms, you have a new composite norm that is always a dual number (instead of only sometimes), you have cool anti-projections onto lower-dimensional spaces; the list goes on.

No one will ever change the fact that the native geometric product of G(3,0,1) motors vectors around like planes. I understand holding to that. But your results show that the anti-operators of a Geometric Algebra belong to it, whether one works in the direct or dual basis. I agree that not including them disrespects the symmetry of the set of basis blades, obscures structure, limits insight, and complicates computation, particularly in the projective case.

Thank you for appreciating the algebraic structures and understanding that it’s desirable to have both products whichever way you want to look at the geometry.

The product ⟑ transforms vectors like planes and trivectors like points. The antiproduct ⟇ transforms vectors like points and trivectors like planes. The two products are perfectly symmetric, and neither is more native than the other. I think part of the issue is that there’s a feeling in people’s minds that the product ⟑ somehow has a higher importance or is more natural than the other product ⟇. It’s not. They’re equals.

@elengyel: I wanted to share some quick feedback after skimming your blog post. Anything that helps spread awareness of duality, as this post does, arouses my interest. Your formulas (which I haven’t had time to inspect in full detail) remind me of the work of Eduard Study, who invented much of the subject of dual quaternions in his book “Geometrie der Dynamen” from 1903, a book I got to know when working on my thesis. Unfortunately, that book has never been translated into English and it’s hard enough to read in the original German, so his work remains unappreciated in the English-speaking world. In any case, you can’t read it, or your post, without coming away with a deep sense of how much remains to be discovered in this domain.

I hope to find time to address some of the more concrete issues you raised in your post at a later time, but there is one general question I’d like to ask now.

If I understand the various “anti-” products that you have introduced, they are analogous to the regressive product. All PGA implementations have a duality operator D (the two most common ones being Hodge duality and Poincare duality). If we denote the progressive product by \wedge and the regressive product by \vee, then we can (and do) define the regressive product by x \vee y := D^{-1}(D(x) \wedge D(y)). Could we say that \vee is the “anti-product” of \wedge? Then, if \wedge is join, the anti-product \vee is meet, etc.

It seems that it should then be possible to implement all the anti-products in your approach using the same strategy as used for the regressive product. If P(x,y) is some part or whole of the geometric product in the original algebra and \widehat{P} is the corresponding “anti-product” in the dual algebra, then \widehat{P}(x,y) := D^{-1}(P(D(x),D(y))) . (Maybe x and y have to be switched on the RHS.)

If that’s so, wouldn’t it be possible to extend the existing PGA APIs by 1-line implementations of the anti-products that you’ve introduced? (Users are free to take advantage of these new functions or not!)
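As a sketch of that one-line strategy in the 2D case, using the coefficient-reversal dual from the generated code quoted at the top of the thread (an involution, so D^{-1} = D); the basis order is the generated one, and the point coordinate convention is an assumption:

```python
# Basis order used by the generated PGA2D code: 1, e0, e1, e2, e01, e02, e12, e012.
# Bit i of a blade's code marks the presence of generator e_i.
BLADES = [0b000, 0b001, 0b010, 0b100, 0b011, 0b101, 0b110, 0b111]
INDEX = {b: i for i, b in enumerate(BLADES)}

def reorder_sign(a, b):
    """Sign from sorting the generators of blade a followed by those of b."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def wedge(x, y):
    """Outer (progressive) product of two 8-component PGA2D multivectors."""
    res = [0.0] * 8
    for i, a in enumerate(BLADES):
        for j, b in enumerate(BLADES):
            if a & b:           # shared generator: the wedge vanishes
                continue
            res[INDEX[a ^ b]] += reorder_sign(a, b) * x[i] * y[j]
    return res

def dual(x):
    """Poincare duality as in the generated code: reverse the coefficients."""
    return x[::-1]

def regressive(x, y):
    """The anti-wedge (join), defined by the dual sandwich described above."""
    return dual(wedge(dual(x), dual(y)))

# Join of the points (1, 0) and (0, 1), written as bivectors
# x*e20 + y*e01 + e12 (an assumed coordinate convention):
p1 = [0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 1.0, 0.0]
p2 = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0]
print(regressive(p1, p2))
# [0.0, -1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]  ->  e1 + e2 - e0, the line x + y = 1
```

Sign conventions for the dual are exactly the subtlety debated later in this thread, so this is a sketch of the mechanism rather than a reference implementation.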

I’m asking because I see the need for the PGA community, in spite of the very productive differences that we represent, to also search for commonality as much as possible. It’s the balance between individual and collective that I think is healthy for both. The power of the new and unfamiliar concepts in PGA is the source of its strength, but it can also be a curse if we allow it to splinter into many conflicting versions. I think all of us active in this forum are working, each in our own way, to help further the spread of PGA to the wider community, spreading the “good news” so to speak, even as we continue to learn about it ourselves. Won’t that process be best served if we also take time to reflect on how we can maintain unity in diversity in our strivings? Where there’s a will, there’s a way. End of sermon. :innocent:


Yes, each product is related to its associated antiproduct by the GA equivalent of De Morgan’s laws with the small generalization that the abstract complement operator, the dual D in this case, isn’t necessarily an involution as it is in set theory and logic. \widehat P(x,y) := D^{-1}(P(D(x),D(y))) is exactly how I implement all antiproducts in Mathematica. (I refer to ∨ as the “exterior antiproduct”, or just the “antiwedge product”.) They are prominently featured at the top of my reference poster:

Something else you’ll see on this poster are the filled and empty bars in the table of basis elements. By recognizing that a basis element can simultaneously be characterized by its full dimensions (grade) and its empty dimensions (antigrade), the fundamental duality inherent in GA is made clear at the most basic level. Anything that can be called point-like by counting filled bars can also be called plane-like by counting empty bars, and vice-versa. I mentioned on Twitter yesterday that if you assign a 1 to filled bars and a 0 to empty bars, then each basis element in an n-D algebra has a unique n-bit code. In the nondegenerate case, disregarding sign, the geometric product then amounts to an XOR operation, and the geometric antiproduct amounts to an XNOR operation. You could also choose to assign a 0 to filled bars and a 1 to empty bars, and the two operations would then exchange places. The takeaway is that there is perfect symmetry, and there can be no statement made about points that is not also true, from an opposite perspective, about planes, and there can be no statement made about planes that is not also true, from an opposite perspective, about points. (And this naturally extends to elements of higher grade/antigrade in 5D+.)
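A tiny sketch of that bit-code observation, labeling the 3D PGA generators e0..e3 by bits 0..3 (the labeling is an assumption; signs and metric factors are deliberately disregarded, as in the paragraph above):

```python
def blade_name(code):
    """Name of the basis element for a 4-bit code (bit i = generator e_i)."""
    return "1" if code == 0 else "e" + "".join(str(i) for i in range(4) if code >> i & 1)

# Disregarding sign and metric factors, the basis element produced by the
# geometric product is the XOR of the two codes: shared generators contract
# away, distinct ones survive.
e12, e23 = 0b0110, 0b1100
print(blade_name(e12 ^ e23))  # e13, since e12 e23 = e1 (e2 e2) e3 = +/- e13
```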

Of course, the dual is a NOT operation on the basis elements, but we are free to choose a sign convention. I haven’t worked through it completely, but it would appear that we can actually choose which orientation is positive and which orientation is negative independently for each grade, which would mean that there are 32 valid dual operators in 4D GA, and only two of them are involutions. (It’s a much more manageable 8 if we require 1 \rightarrow \mathbf{I} and \mathbf{I} \rightarrow 1.) I prefer right and left complements (which are inverses of each other) because they are easy to understand and they have the nice relationships with reverse and antireverse that I highlighted in my blog post.

Thank you for your thoughtfulness in this regard. I agree that unity and commonality would have great benefits for everybody, but there are still people in your camp who are actively working against this goal. As recently as this week, De Keninck has advanced his personal vendetta to discredit my viewpoints, and that shit needs to stop immediately if you want cooperation to have any chance. While his GAME2020 lecture contained many interesting insights, it’s clear that his entire motivation was to disparage the vectors-are-points model by claiming there is geometric intuition that exists only in the vectors-are-planes model, and he continued to preach this in the video’s comments. If you understand duality as I have described it above, then you must also understand that any such one-sided advantage is impossible. I pointed out this general feature of GA while SIGGRAPH 2019 was still in session only to be subjected to ferocious backlash. Until I have reason to believe that De Keninck (and at least one other person that I won’t drag into this by name) is going to drastically change his attitude, I’m afraid I have no choice but to continue my GA research independently.

I expect you’ll hear some whining about how I’ve told some people that your SIGGRAPH course contains information that has since been proven to be untrue. A lot of people come to me with questions about GA, and if they happened to start learning with your course, they are usually struggling with what they now believe to be an unbreakable requirement that vectors must be planes. It takes a lot of effort for me to explain that both models are equally valid and they can keep vectors as points if they want to. But it’s especially difficult when members of the “antiproduct is crazy” tribe swoop in to trash everything I have to say whenever they get a Google alert for the words “geometric algebra”. Maybe you can understand if I preemptively inform people that new developments have proven that the SIGGRAPH course doesn’t paint the whole picture when a link is posted in my neighborhood so that, at very least, people can go into it knowing that some of the subject material has been superseded. I’m not telling anyone that the vectors-are-planes model is wrong, but only that the requirement that vectors be planes is not correct. Ideally, a notice could be attached to the course stating something along the lines of “In the time since this course was prepared, advances have been made in this subject, and it is now known that an equally valid model in which vectors are points can also be constructed.”


I’m not aware that the SIGGRAPH course notes claim that 1-vectors can’t be points. Indeed, if you look at Sec. 5.10.2 you’ll find a description of the Poincare map J that moves an element between its representation in the plane-based algebra and its representation in the point-based algebra (sometimes called the dual coordinate map). Using it following the approach sketched in my previous post, all the anti-products can, it seems to me, be directly implemented within the existing PGA framework. (See the related discussion below of Hodge and Poincare duality.)

It’s also important I think to point out that the course notes are not the beginning of PGA but a fruit of many years of work (the biggest part of it is a wonderful chapter of 19th century mathematics). The current form of PGA has been described in my thesis (2011) and various articles (all referenced in the course notes).

In my thesis for example the J map is described in detail in Sec. 2.3.1.3 (a small correction for the grade-n elements published on this forum a year ago here). If you follow that argument (based on index sets and their complements) I believe you can recognize the bit-string representation of the basis vectors and the use of XOR to calculate the dual coordinates that you describe in your previous post. (Please correct me if I’m wrong.)

I do not mention this to diminish the discovery you have made; but to raise the question, how can a community of learners achieve their highest potential? To avoid re-inventing the wheel, it seems to me that each one needs to take the responsibility, as far as they are able, to educate themselves about what has already been learned and codified. Especially researchers who feel drawn to original research as yourself stand to benefit the most by thoroughly informing themselves about what the ancestors have wrought.

Regarding the issue of “camps”: I don’t identify myself with any camp. In fact, exactly on the question of duality in PGA there is an ongoing and lively discussion regarding the right way to do it, which you can read about on this forum (search for “Hodge” or “Poincare” for example). This difference of opinions has possibly contributed to your perception that the point-based algebra has no place in PGA: the use of the Hodge duality (which has several vocal supporters in the community) at first glance appears to support that interpretation. On the other hand, the Poincare duality (IMHO) provides “equal citizenship” for the two algebras in a clear and unambiguous way. It’s the original form of duality in PGA and it seems to me that it matches your needs. What do you think?

Looking to the future: I’m hopeful that it will be possible to adjust our mental maps in such a way that we see that Hodge and Poincare are essentially identical (operationally), leaving each user free to interpret the map as he pleases (as a self-map (Hodge) or as a map between algebras (Poincare)). A bit like religious freedom, a cultural advance that allowed a great flowering of collaboration. I intend to provide a fuller discussion in an updated version of a short white paper devoted to this and related issues. If and when I get that finished I’ll post it here.

At the time those notes were written, nobody was aware of the fact that the whole algebra also worked with 1-vectors being points. The idea was completely alien to all of you when I brought it up. You yourself mocked me for it right here on these forums, De Keninck wrote a long blog post saying I was wrong, and Ong wrote a long blog post that basically called me a crackpot. These are not the actions of people who already knew about it. Now that you realize I was right, you’re pretending that it was already understood and that you supported it all along. That really sucks.

Sigh.

I’m not claiming that the XOR operation was my discovery (and you mentioned calculating dual coordinates, but XOR just calculates the geometric product, so it’s not clear you were following 100%). I was just using it as a tool to highlight some basic things about duality to some other people, and I mentioned it again here.

That term is somewhat of a misnomer. Associating points with 1-vectors and planes with 3-vectors can still be considered a plane-based algebra. The operations are still based on reflections through planes.

@elengyel and the readers of this thread: due to other commitments I have to withdraw from the conversation at this point. I believe that developing a common framework for PGA is in everyone’s best interest. But of course it has to arise out of mutual agreement and insight. Perhaps later the circumstances for this will be more favorable. In the meantime, I wish everyone productive “PGA-ing” in whatever form that takes.

My apologies for taking us all down that road again. But thank you all for all the ideas in your responses. I think everyone seems to agree that although there are many symmetries present in duality, between meet and join, product and anti-product, and so on, there is one fundamental asymmetry introduced by the null direction. And this makes planes under the geometric product in G(3,0,1) generate Euclidean space, hence: projective/plane-based PGA. The anti-product can equally (!) represent this asymmetry. The asymmetry in the underlying algebra, and the symmetry in its representation, are two different things. Choice of representation is important and investigation and debate will continue. But stepping back, PGA is just an incredible insight, one of the best I have ever seen, in all likelihood the beginning of something quite far-reaching. May we all enjoy our time with these wonderful ideas.

tantony, I hope we answered the question! :wink:


To the people looking for a practical answer:

The dual of a basis blade a is the element a_D that, \textbf{multiplied from the left}, produces the pseudoscalar I.

a_D*a=I

|a_D| (without sign) has all the basis vectors absent from a, and none of the basis vectors contained in a.

To get the sign, compute |a_D|*a and take the sign of the result.

Example: \mathbb{Cl}_{2,2,2} has the basis vectors \epsilon_1, \epsilon_2, e_1, e_2, \gamma_1, \gamma_2, where \epsilon_i^2=0, e_j^2=1, and \gamma_k^2=-1.

Then the pseudoscalar is I=\epsilon_1\epsilon_2e_1e_2\gamma_1\gamma_2=\epsilon_{12}e_{12}\gamma_{12} (in the notation of the bivector.net software, I=e_{012345}).

If a=\epsilon_1e_1\gamma_2, then |a_D|=\epsilon_2e_2\gamma_1.

Because |a_D|*a=-\epsilon_1\epsilon_2e_1e_2\gamma_1\gamma_2, the sign is negative, so the dual of a is

a_D=-|a_D|=-\epsilon_2e_2\gamma_1

In Bivector.net notation:

a=e_{025}
a_D=-e_{134}
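The recipe above is easy to mechanize. Here is a minimal sketch (my own helper names, not from any library) that computes the dual of a basis blade from its index set alone. Since a_D and a share no basis versors, no squares (which could be 0, +1, or -1) ever occur in the product a_D*a, so the metric signature is irrelevant: the sign is just the parity of the permutation that sorts the indices of a_D followed by those of a into pseudoscalar order.

```python
def perm_sign(perm):
    # Sign of a permutation of distinct integers, via inversion count.
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def blade_dual(blade, dim):
    """Dual a_D of a basis blade a, defined so that a_D * a = +I.

    `blade` is a sorted tuple of basis-vector indices, e.g. (0, 2, 5)
    for e025 in a 6-dimensional algebra.  Returns (sign, indices of a_D).
    """
    complement = tuple(i for i in range(dim) if i not in blade)
    # Parity of a_D's indices followed by a's indices, relative to I.
    sign = perm_sign(list(complement) + list(blade))
    return sign, complement

# The example from this post: in Cl(2,2,2) with basis e0..e5,
# a = e025 (i.e. eps1 e1 gamma2) has dual a_D = -e134.
print(blade_dual((0, 2, 5), 6))  # → (-1, (1, 3, 4))
```

Note that this reproduces the coefficient-reversal pattern in the bivector.net `Dual` code from the question: the complement of a blade's index set is exactly the "mirror" blade in the multivector array, and the permutation parity supplies the signs.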

To all who want to compute the dual in 3D PGA (and also in ST PGA):

To compute the dual, I offer this paper:

In this paper, I give methods for computing the dual in 3D PGA. The methods produce the geometric-entity dualization described in papers by C. Gunn, for which no clear implementation ever seems to have been given by Gunn or others. My methods are explicit (algorithmic or algebraic), not just theoretical descriptions of what the dual should be. The dual I give differs slightly from the tables of duals that others (Gunn and Dorst) give, but the new dual method appears to be correct. As far as I am concerned, the duals problem is solved in my paper. The 3D PGA dualization is an anti-involution and a kind of Hodge dual (depending on how it is defined).

The paper also has more in it on dual quaternions and a doubling of PGA for quadrics.

For those interested, I made a start on Space Time PGA in a short paper:

In v1, I forgot to define the unit pseudoscalars, which are I_3=e_1\wedge e_2\wedge e_3, I_4=I_3\wedge e_4, and I_5=e_0\wedge I_4. This is very straightforward, borrowing from 3D PGA and Conformal Spacetime Algebra (CSTA), which may not be well known. The paper includes two methods for taking the duals in ST PGA. Entities dualize with the same orientation, and the dualization is a nice involution in ST PGA. It is also a kind of Hodge dual.

Comments are welcome. Probably, there will be many criticisms against my method of taking the dual, since it does not match what has been published by others previously, but as far as I am concerned my duals are correct. Enjoy.

Robert Benjamin Easter.

I’ve explained what the dual should be countless times over the years. Here is a video I made based on my 2021 paper, which in turn builds on my earlier work on the Hodge complement, the thing I had been trying to tell these people for years before that.

Most people in the GA community use an incorrect variant of the complement operation, which ultimately contributes to inconsistencies such as 1=0, among other errors they make (such as in the interior product).

I don’t immediately have time to check @Robert_Easter’s work on the complement.

My definition of Hodge complement is made such that it maintains compatibility with theorems in other branches of mathematics, such as the differential geometry of the 20th century.

Specifically, my definition is \star\omega = \tilde\omega I, which I have stated explicitly for years now. @Robert_Easter is incorrect when he says that people don’t have an algorithm or explanation for it: I have given this definition for many years, and it is an exact definition in terms of the geometric product, so if you have an algorithm for the geometric product you can also derive this algorithm.
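As a small worked check of that defining relation (my own example, not from the video), take the non-degenerate algebra \mathbb{R}_{3,0,0}, where \tilde\omega is the reverse and I=e_{123}:

```latex
\star e_1 = \tilde{e}_1\, I = e_1 e_{123} = e_{23},
\qquad
\star e_{12} = \tilde{e}_{12}\, I = e_{21} e_{123} = -e_{12} e_{123} = e_3 ,
```

which match the familiar Hodge duals in \mathbb{R}^3 (a vector maps to its orthogonal plane, a plane to its normal vector).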

@Robert_Easter can download Grassmann.jl and verify whether his work is consistent with mine; if it is, then it is also consistent with historical differential geometry and the historical Hodge complement. I may check this myself later, when I feel like it, but at this very moment I don’t have time to check.


@chakravala Well, if you have a verifiable preprint from 2021, on a preprint server with an accepted timestamp and date, that already has what I have, then I’d like to get a link to your paper and take a look. Otherwise, every paper and online resource that I have found gets the dualization in PGA wrong, including current and recent literature (published and in preprints). There seem to be long-running discussions in which it is all but admitted that the dualization is a problem, and that no one has figured out for sure how to handle the signs correctly to maintain orientation. I am not concerned with maintaining any kind of compatibility with old theory, but simply with getting the dual right for 3D PGA (and for ST PGA).

I took a very empirical and pragmatic approach, and found the dual in PGA to be a simple anti-involution that can be implemented as a kind of Hodge dual in a corresponding non-degenerate algebra. The Hodge dual is itself a confused subject, with more than one definition in the literature. I defined the Hodge dual in geometric algebra as it is defined in non-degenerate algebras, allowing for multiplication or division by the unit pseudoscalar on the LHS or RHS, and also for modifications using sandwiching, as explained in my paper, which allows some generalization of the Hodge dual. It is not really that complicated. My defining relation for the Hodge dual in geometric algebra is a well-established formula that is general and reliable in all pseudo-Euclidean metric algebras. So, you just have to take a look, because it is compatible with some of the defining relations for the Hodge dual given in other literature.

Thanks for taking a look, and I’m curious about your paper and whether it really already has this. I doubt it. At a glance, looking at your video, I do not think you define the Hodge dual the same way, so it is unlikely to be what I have. I am curious to know whether there really is prior work, before I attempt to publish and waste time and effort on something that has already been done. If someone can show how my dualization is wrong, then great, I want to know. But I have tested it, and it works rather perfectly.

There are not multiple definitions of the Hodge complement; there is only one correct way to define it.

If your definition does not match mine, then yours is not a Hodge complement, and therefore yours is also incorrect. The Hodge complement is based on the original Grassmann complement, which is over a century old and has been discussed for over a century. Most people studying geometric algebra have not done their due diligence in using definitions compatible with the original Grassmann definition.

If you say your definitions do not match mine, then I do not need to bother with your work any further, as it is then incorrect. My definitions match the original Grassmann and Hodge definitions, and I have given the proofs in my preprints and in my video. However, you merely need to install my Grassmann.jl software to check the results, as my preprint and video merely describe what my software has encoded since 2018. My definitions go back to 2018, but they are really just the original Grassmann and Hodge definitions, expressed with the geometric product.

So it sounds like I don’t even need to bother with checking your paper, since you just told me that yours is wrong and not compatible with the original Grassmann/Hodge stuff.