Exterior product of grade zero blades

I am new to GA; to learn more, I am trying to implement the operations from the site projectivegeometricalgebra.org using Grassmann.jl.

When I try to replicate the multiplication table for “∧”, all entries match except the top left one for v ∧ v. (Grassmann.jl uses “v” for the scalar, while the PGA website uses “1”.)

Grassmann.jl: v ∧ v = 1
projectivegeometricalgebra: 1 ∧ 1 = 0

using Grassmann
G301 = @basis D"1,1,1,0"
(⟨1,1,1,0⟩, v, v₁, v₂, v₃, v₄, v₁₂, v₁₃, v₁₄, v₂₃, v₂₄, v₃₄, v₁₂₃, v₁₂₄, v₁₃₄, v₂₃₄, v₁₂₃₄)
projectivegeometricalgebra.org uses a slightly different basis:
v31 = v3 ∧ v1
v43 = v4 ∧ v3
v42 = v4 ∧ v2
v41 = v4 ∧ v1
v321 = v3 ∧ v2 ∧ v1
v314 = v3 ∧ v1 ∧ v4
𝟙 = v1 ∧ v2 ∧ v3 ∧ v4
PGAbasis = (v, v1, v2, v3, v4, v23, v31, v12, v43, v42, v41, v321, v124, v314, v234, 𝟙)
Number[b ∧ a for b in PGAbasis, a in PGAbasis]
v ∧ v
v

The table for “∨” has the opposite issue with the bottom right entry for the pseudoscalar:
Grassmann.jl: v₁₂₃₄ ∨ v₁₂₃₄ = v₁₂₃₄
projectivegeometricalgebra: 𝟙 ∨ 𝟙 = 0
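The ∨ table can be reproduced the same way (a minimal sketch reusing the PGAbasis defined above; ∨ is Grassmann.jl’s regressive product):

Number[b ∨ a for b in PGAbasis, a in PGAbasis]   # regressive-product table
𝟙 ∨ 𝟙                                            # Grassmann.jl gives v₁₂₃₄ here, not 0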

What is going on here? Is this an issue of differing conventions, or did I set up the basis in Grassmann.jl incorrectly?

What’s going on here is that an empty exterior product is 1. So \wedge() = 1, and note that

[\wedge(a,b,c,...)]\wedge[\wedge(x,y,z,...)] = \wedge(a,b,c,...,x,y,z,...) implies

1 \wedge 1 = [\wedge()] \wedge [\wedge()] = \wedge() = 1, since concatenating two empty index lists gives the empty list again, so that [\wedge()]\wedge[\wedge()]=\wedge().

So what’s going on here is that Grassmann.jl follows this way of thinking about the result.

However, the alternative result used by the other website implies that \wedge() = 0 instead of \wedge() = 1, which is inconsistent with the mathematics I am familiar with.

Therefore, I consider Grassmann.jl to be correct here. However, I would be interested to hear different points of view if there are any.
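As a toy illustration of this combinatorial picture (a sketch in plain Julia, not Grassmann.jl itself; it ignores signs and treats a blade as a tuple of basis indices, with the scalar as the empty tuple):

# Toy model: a blade is a tuple of basis indices; the scalar basis element is ().
# The exterior product concatenates index lists; a repeated index gives zero.
toywedge(a::Tuple, b::Tuple) = isdisjoint(a, b) ? (a..., b...) : nothing   # nothing stands for 0

toywedge((1,), (2,))   # (1, 2): v1 ∧ v2
toywedge((1,), (1,))   # nothing: v1 ∧ v1 = 0
toywedge((), ())       # (): the empty product again, i.e. the scalar 1

Declaring toywedge((), ()) to be zero instead would break the plain concatenation rule, which is the inconsistency described above.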


Thank you, chakravala. Your JuliaCon talk was my inspiration to learn about GA.


I’ve written an explanation of what’s going on here:

http://terathon.com/blog/wedge-and-dot-products-involving-scalars/


Hi @elengyel,

Great to read this post from you - hope you decide to participate more again! I’m in the camp that believes the wedge product with a scalar, and between scalars, should be non-zero - even though I think it is an esoteric point with little practical implication either way. Part of this is my desire to be compatible with other places where the exterior product is used (e.g. differential geometry, where the product of a non-zero scalar with a non-zero one-form is a non-zero scaled one-form). Apart from ‘historical compatibility’, the main question for me is the following point in your post:

" If we accept that a scalar quantity s is orthogonal to any non-scalar quantity b "

Could you explain why you think this is a natural choice? For me, the commutativity of scalar multiplication with any other grade indicates the exact opposite: commuting elements are not in orthogonal subspaces, so the scalar quantity is not orthogonal?

Would love to hear your thoughts,

Steven.

I’ve already stated above my reasoning for why the exterior product of scalars is not zero: it is because the empty exterior product is 1. I understand where the desire comes from to make it 0, but once I understood that the empty exterior product is 1, I saw that this is the only way it can be. Why do we need the empty exterior product to be 1? It’s because we need an exterior basis for the entire multivector space, including the scalars. Hence, the exterior basis element for the scalars cannot be zero; it must be the non-zero scalar 1.

Note that the number 1 creates a basis for the scalars, while the number 0 does not.

I should have clarified that my post was meant to be a response to @zorv’s original question explaining the reasons why my multiplication table was different. It wasn’t my intention to rebut anything @chakravala said or to imply anybody was wrong. On the contrary, I have collected numerous valid arguments in favor of both possibilities (s \wedge t = st vs. s \wedge t = 0), and this has made it difficult to decide which one really is more natural. I have not seen anything other than an axiomatic definition for s \wedge t, and that includes the statement \wedge() = 1, but if the proper behavior of this product can be deduced from some fundamental principle, I’d love to hear about it.

The conclusions I made in my blog post are based on two desirable properties: (1) the wedge product and dot product are disjoint components of the geometric product, and (2) (a) the internal parallelism possessed by every Euclidean isometry \mathbf a can be expressed by \mathbf a \wedge \mathbf{\tilde{a}} = 0, and (b) every isometry \mathbf a also has a norm given by \Vert\mathbf a\Vert^2 = \mathbf a \cdot \mathbf{\tilde{a}}. (These properties also have symmetric counterparts under the dual operations.) The properties in (2) are true for motors only if (1) is required and s \wedge t = 0. That’s what I know. Now if someone can show me how the properties in (2) don’t really matter because there is some other fundamental property shared by all Euclidean isometries that identifies the conditions required to satisfy the internal parallelism, I’ll be perfectly happy to change my thinking.

I would also like to hear any explanations for why it’s acceptable to have \mathbf a \wedge \mathbf b = \mathbf a \cdot \mathbf b for any basis elements \mathbf a and \mathbf b in the algebra, which currently happens under the broadly recognized definitions for \wedge and \cdot if either \mathbf a or \mathbf b is a scalar. If wedge products with scalars are not zero, and we do impose property (1) above, then we must accept the Hestenes dot product, which I know some people around here don’t like. (However, this still won’t make the isometry properties work out.)

I agree, and I was attempting to express the same sentiment near the beginning of my blog post, but it would still be nice to settle the matter one way or another. We appear to also agree that the main question to answer is “Where does this have any significant consequences?”, and I believe this can be answered by identifying nontrivial properties of the objects appearing in PGA that depend on the definitions of wedge and dot products with scalars. The isometry properties listed above and discussed in my blog post are the two examples I’m aware of.

I agree here as well, and I would be interested in hearing why the wedge product (as opposed to scalar multiplication – see below) of a scalar function with a one-form must be the scaled one-form. Is this also an axiom, or can it be deduced from more fundamental principles? An argument from differential geometry could lead to a better understanding of what’s happening in plain old exterior algebra. [Edit: Wait – the wedge product s \wedge \mathbf b is still nonzero and thus not inconsistent with the wedge product of a scalar function and a one-form. The only question concerns the wedge product of two scalar functions.]

The thinking here is that the scalars, when embedded in an n-dimensional geometric algebra, all lie along a single axis with basis vector \mathbf 1 in the 2^n-dimensional graded vector space. Anything non-scalar is automatically orthogonal to them in that vector space because it does not involve the basis vector \mathbf 1.

It would seem that scalars are exceptional when it comes to properties like this no matter what choice we make for s \wedge t. If s \wedge t = st, then the basis element \mathbf 1 is the only basis element for which the wedge product with itself is not zero. If s \wedge t = 0, then scalars are the only things that commute with everything else without being in parallel subspaces. I’m sure we could come up with more examples either way, and this is very much the kind of thing I was talking about at the top of this post when I said I’ve collected valid arguments in support of both.

I don’t follow how the empty exterior product being 0 or 1 is related to the basis element for scalars. Are you conflating scalar multiplication by elements of the scalar field with exterior multiplication by members of the grade-0 subspace corresponding to an embedding of the scalar field in the vector space? I think these are two subtly different operations. That is, given a true scalar value s from the field and a grade-0 element t\mathbf 1 in the vector space (where t is another true scalar value), the products s(t\mathbf 1) and s\mathbf 1 \wedge t\mathbf 1 have different meanings. The first product is (st)\mathbf 1, while the second product can be zero without creating a conflict.
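In symbols, and just restating the two conventions under discussion, the distinction is between s(t\mathbf 1) = (st)\mathbf 1, where the scalar field acts on the vector space, and (s\mathbf 1) \wedge (t\mathbf 1), which is st\,\mathbf 1 under the convention \wedge() = 1 but 0 under the convention I am describing.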

Thanks for saying that, but I honestly still feel like this is a very hostile environment.


Calculating the exterior product of 2 scalars is related in the sense that each scalar is by definition an empty exterior product, as this is the only combinatorial possibility. So \wedge(s, t) = \wedge(s\wedge(), t\wedge()) = st\wedge(), which depends on the empty exterior product. In the exterior algebra of the original Grassmann flavour, it is desirable to view the multivector space as the combinatorial subspaces with linear dependence characterized by Ausdehnung, so the condition \wedge() = 1 is natural.

Suppose that \wedge(\wedge(),\wedge()) = 0 = -\wedge(\wedge(),\wedge()); then, since \wedge(\wedge(),\wedge()) = \wedge() by concatenation of the empty index lists, it follows that \wedge() = 0. However, what we want is \wedge() = 1, and so we deduce that \wedge(\wedge(),\wedge()) = \wedge() = 1. This is desirable since it is consistent with moving an exterior product around without swapping any indices; it is the index combinatorics to which the sign swaps are associated.


My position is that this implication is not valid. It is perfectly fine for \wedge() = 1 and \wedge(\wedge(),\wedge()) = 0 to both be true statements simultaneously. I am well aware of the combinatorial structure of Grassmann algebra, and in my opinion, this does not create an inconsistency.

@chakravala, could you please introduce yourself? All the anonymity around here, especially among forum leaders, contributes to a negative atmosphere.

In my way of thinking, I believe this law should be satisfied, which is why I have the implication. It sounds like you don’t assume this relation, but I do.

My name is Michael, you can find out about me on my github repositories. I have been studying math and geometric algebra, that’s all there is really to know about me. My conclusions here are my own thoughts.

For those who are interested, I posted a follow-up here:

http://terathon.com/blog/wedge-and-dot-products-involving-scalars-not-so-insignificant/

It turns out that the precise handling of products involving scalars has larger consequences than originally thought.

One of the first things I talked about was how it would be desirable for the wedge and dot products to be fully disjoint, meaning that if one is nonzero, then the other must be zero.

This is not an axiom in Grassmann algebra as originally defined by Hermann Grassmann. It’s wishful thinking of your own to desire this property. However, as I have shown, \wedge(\wedge(),\wedge()) = \wedge() really needs to equal 1 in Grassmann algebra.

Therefore, your algebra is NOT compatible with Grassmann’s exterior algebra. Your algebra is some sort of other algebra incompatible with Grassmann multilinear algebra. In multilinear algebra, the empty exterior product needs to be the element generating the scalar vector space, a space generated by 1, not 0.


http://terathon.com/blog/wedge-and-dot-products-involving-scalars/ This link is broken. Is there any alternate page?

Just for future reference, an appealing way to see why Grassmann.jl’s treatment of scalars is most natural is to note how ∧ may be defined for general multivectors in terms of the geometric product:

a \wedge b = \sum_{p,q} \langle \langle a \rangle_p \langle b \rangle_q \rangle_{p+q}

Here, \langle\,\cdot\,\rangle_k denotes the grade-k part. Thus, for scalars, \alpha \wedge \beta = \langle \langle \alpha \rangle_0 \langle \beta \rangle_0 \rangle_0 = \alpha\beta, not zero!

If the wedge product of scalars were zero, then the formula above would be messier and enjoy fewer nice properties. See @LeoDorst’s excellent summary paper about this: 10.1007/978-1-4612-0089-5_2.
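For anyone who wants to check the scalar case numerically, here is a minimal sketch in Grassmann.jl (assuming the same ⟨1,1,1,0⟩ signature used earlier in the thread; any signature works for this point):

using Grassmann
@basis D"1,1,1,0"     # defines v, v1, ..., as in the first post
α, β = 2v, 3v         # two grade-0 multivectors
α ∧ β                 # 6v, matching ⟨⟨α⟩₀⟨β⟩₀⟩₀ = αβ rather than zero
α * β                 # the geometric product gives the same 6v for scalars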

The same issue, I think, arises with linear vector-valued functions of vectors, f(a), extended to outermorphisms: f(a ∧ b) = f(a) ∧ f(b). What is f(1)? Hestenes has f(1) = 1.

Hestenes would probably agree that 𝒖 = 1 ∧ 𝒖, and hence

f(𝒖) = f(1 ∧ 𝒖) = f(1) ∧ f(𝒖)

so that we must have f(1) = 1.
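To make this concrete, here is a small sketch (the outermorphism below is hand-rolled for illustration rather than any Grassmann.jl API; it sends each basis vector through a matrix M and extends to blades by wedging the images):

using Grassmann
@basis D"1,1,1,0"                          # same signature as earlier in the thread
M = [1 2 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]   # an arbitrary linear map on the vectors
e = (v1, v2, v3, v4)
fvec(i) = sum(M[j, i] * e[j] for j in 1:4)                       # image of the basis vector vᵢ
outer(idxs...) = isempty(idxs) ? v : reduce(∧, map(fvec, idxs))  # empty case set to 1
outer(1, 2)    # image of v₁₂ = v₁ ∧ v₂
outer()        # image of the scalar blade: v, i.e. 1

Defining outer() to be anything other than 1 would break the identity outer(1, 2) = outer() ∧ outer(1, 2), which is the same consistency requirement as f(𝒖) = f(1 ∧ 𝒖) above.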

Both of @elengyel’s blog posts were apparently removed from his blog. (I imagine intentionally?)

But if you go to the Wayback Machine and paste the now-nonfunctional URLs, you can find archived copies of both.