Regarding products

Hi, I’m very new to all this and I need some clarification.
As I’ve read, the geometric product is the sum of the inner and outer products: ab = a.b + a^b.
But take, e.g., the PGA3D operation (1e123+2e01)(3e123+4e01) = -3 + 2e023.
However, the inner product gives (1e123+2e01).(3e123+4e01) = -3,
and the outer gives (1e123+2e01)^(3e123+4e01) = 0 (at least according to ganja’s calculator).
So e023 is in neither the inner nor the outer product, but it is still in the geometric product? I mean, if you work it out by hand it’s “obvious” that it belongs in the geometric product. So what’s up with the sum of the inner and outer?

I guess it has something to do with the inner product being grade-lowering and the outer grade-raising, so for e01e123 the inner wants grade 1 and the outer wants grade 5, whereas the resulting grade is 3?

Can I get some help, or a link to “geometric algebra for dummies”? :wink:

Best,
Dan

1 Like

Only for a vector y is y X = y.X + y^X. In general (so for y not a vector), there are more terms. The GP contains all grades from the absolute difference of the grades (the inner product) up to their sum (the outer product), in steps of two. The in-betweens do not have names (though for Y a bivector, the commutator product appears).
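To make this concrete, here is a small self-contained Python sketch (a from-scratch toy, not how ganja.js computes; blades are encoded as sorted tuples of generator indices, multivectors as {blade: coefficient} dicts) that reproduces the PGA3D example from the question:

```python
def blade_mul(a, b, sq):
    """Product of two basis blades given as sorted tuples of generator
    indices (e.g. (1, 2, 3) for e123); sq[i] is the square of e_i.
    Returns (sign, resulting_blade)."""
    gens, sign = list(a) + list(b), 1
    for i in range(1, len(gens)):            # insertion sort; each swap of
        j = i                                # distinct generators flips the sign
        while j > 0 and gens[j - 1] > gens[j]:
            gens[j - 1], gens[j] = gens[j], gens[j - 1]
            sign, j = -sign, j - 1
    out = []
    for g in gens:                           # contract equal neighbours: e_i e_i = sq[i]
        if out and out[-1] == g:
            sign *= sq[g]
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

def gp(A, B, sq):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sq)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# 3D PGA: generators e0..e3 with e0^2 = 0 and e1^2 = e2^2 = e3^2 = 1
sq = {0: 0, 1: 1, 2: 1, 3: 1}
A = {(1, 2, 3): 1, (0, 1): 2}   # 1e123 + 2e01
B = {(1, 2, 3): 3, (0, 1): 4}   # 3e123 + 4e01
print(gp(A, B, sq))             # {(): -3, (0, 2, 3): 2}, i.e. -3 + 2e023
```

The scalar part -3 comes from the \mathbf e_{123}\mathbf e_{123} term (grades 3 and 3, landing at |3-3| = 0), while the 2\mathbf e_{023} part comes from the cross terms between grades 3 and 2: grade 3 sits strictly between |3-2| = 1 and 3+2 = 5, one of those unnamed in-between grades.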

2 Likes

Aha! That explains it!
I feel silly, as I should probably have been able to find that somewhere.
(I did try! But I probably glossed over it.)

Anyway, thanks a lot.
Best,
Dan

1 Like

Hi Dan,

Welcome to the forum!

I happen to think that’s actually a great question to have out in the open. Plenty of introductions to Geometric Algebra take that road, and it’s easy to end up with the assumption that this decomposition into inner and outer products is a general thing.

It most certainly is not. As Leo pointed out, it is in fact only generally true for vectors, so leaning on it is probably not doing GA any favors: one of the core ideas is to treat elements of arbitrary grade (i.e. dimension) on an equal footing.

I personally prefer the so-called ‘contraction axiom’ approach to understanding the geometric product (and the ‘lazy’ approach to defining the inner and outer products). In a nutshell:

  • start from \mathbb R
  • add some orthogonal basis vectors \mathbf e_i, called generators
  • the \mathbf e_i satisfy the contraction axiom … their square is a real number, for normalized generators written {\mathbf e_i}^2 \in \{-1, 0, +1\}

Think of it as just extending the reals with some new numbers, that are not in \mathbb R but their square is.

\mathbf e_i \notin \mathbb R, {\mathbf e_i}^2 \in \mathbb R

as in linear algebra, arbitrary vectors are written as linear combinations of these \mathbf e_i

\vec x = x_1\mathbf e_1 + x_2\mathbf e_2

Now since you could’ve picked any vector as basis vector, the contraction axiom must also hold for a general vector \vec x (2D example because I’m lazy):

\begin {aligned}\color{green}{\vec x}^2\color{black} &= (x_1\mathbf e_1 + x_2\mathbf e_2) (x_1\mathbf e_1 + x_2\mathbf e_2) \\ &= \color{green} x_1^2\mathbf e_1^2 \color{black}+ x_1\mathbf e_1x_2\mathbf e_2 + x_2\mathbf e_2 x_1\mathbf e_1 + \color{green} x_2^2\mathbf e_2^2\end{aligned}

Now since everything in green is, by definition, \in \mathbb R, while {\vec x}^2 must be \in \mathbb R by the contraction axiom, the black part of that last equality needs to be zero. Anti-commutativity of different basis vectors follows:

\begin{aligned} x_1x_2\mathbf e_1\mathbf e_2 + x_2x_1\mathbf e_2\mathbf e_1 &= 0 \\ x_1x_2\mathbf e_1\mathbf e_2&= - x_2x_1\mathbf e_2\mathbf e_1\\ \mathbf e_1\mathbf e_2 &= -\mathbf e_2\mathbf e_1 \end{aligned}

Now, the entire geometric product can be seen as applying these two rules:

\text{Same basis vectors \bf{contract}}\\ \boxed{\mathbf e_i^2 \in \{-1,0,+1\}}
\text{Different basis vectors \bf{anticommute}}\\ \boxed{\mathbf e_i\mathbf e_j= -\mathbf e_j\mathbf e_i}

Now for two vectors it becomes easy to see … they are both linear combinations of the \mathbf e_i, so each term in the product of two vectors will involve either \mathbf e_i\mathbf e_i, which contracts into a scalar (part of the inner product), or \mathbf e_i\mathbf e_j, which produces bivector components in the result (part of the outer product).

\begin{aligned} \vec x \vec y &= \langle \vec x \vec y \rangle_0 + \langle \vec x \vec y \rangle_2\\ &= \vec x \cdot \vec y + \vec x \wedge \vec y\end{aligned}
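If you like to see that happen numerically, here is a tiny self-contained Python toy (blades encoded as sorted tuples of generator indices, multivectors as {blade: coefficient} dicts; an illustration of my own, not any particular GA library) that multiplies two 2D vectors and picks out the two grade parts:

```python
def blade_mul(a, b, sq):
    """Product of two basis blades given as sorted tuples of generator
    indices; sq[i] is the square of e_i. Returns (sign, resulting_blade)."""
    gens, sign = list(a) + list(b), 1
    for i in range(1, len(gens)):            # insertion sort; each swap of
        j = i                                # distinct generators flips the sign
        while j > 0 and gens[j - 1] > gens[j]:
            gens[j - 1], gens[j] = gens[j], gens[j - 1]
            sign, j = -sign, j - 1
    out = []
    for g in gens:                           # contract equal neighbours: e_i e_i = sq[i]
        if out and out[-1] == g:
            sign *= sq[g]
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

def gp(A, B, sq):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sq)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def grade_part(A, k):
    """Select the grade-k part <A>_k of a multivector."""
    return {b: c for b, c in A.items() if len(b) == k}

sq = {1: 1, 2: 1}                 # Euclidean 2D: e1^2 = e2^2 = 1
x = {(1,): 1, (2,): 2}            # x = 1e1 + 2e2
y = {(1,): 3, (2,): 4}            # y = 3e1 + 4e2
xy = gp(x, y, sq)
print(xy)                         # {(): 11, (1, 2): -2}
print(grade_part(xy, 0))          # {(): 11}       = x . y
print(grade_part(xy, 2))          # {(1, 2): -2}   = x ^ y
```

The grade-0 part is the familiar dot product (1\cdot3 + 2\cdot4 = 11), and the grade-2 part is the wedge (1\cdot4 - 2\cdot3 = -2).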

For two bivectors b and c, proceeding similarly, you’ll find that their product contains scalar (both generators contract, e.g. \mathbf e_{12}\mathbf e_{12}), bivector (one generator contracts, e.g. \mathbf e_{12}\mathbf e_{23}) and quadvector (no generators contract, e.g. \mathbf e_{12}\mathbf e_{34}) parts:

\begin {aligned} bc &= \langle bc \rangle_0 + \langle bc \rangle_2 + \langle bc \rangle_4\\ &= b \cdot c + b \times c + b \wedge c \end {aligned}

The grade 2 part here is called the commutator (as Leo mentioned above), and is more generally defined as a \times b = \frac 1 2 (ab - ba).
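Both the grade content and the commutator are easy to verify numerically. In the self-contained Python toy below (my own throwaway blade encoding, not a real GA library), a product of bivectors in Euclidean 4D yields exactly grades 0, 2 and 4, and its grade-2 part matches \frac 1 2 (bc - cb):

```python
def blade_mul(a, b, sq):
    """Product of two basis blades given as sorted tuples of generator
    indices; sq[i] is the square of e_i. Returns (sign, resulting_blade)."""
    gens, sign = list(a) + list(b), 1
    for i in range(1, len(gens)):            # insertion sort; each swap of
        j = i                                # distinct generators flips the sign
        while j > 0 and gens[j - 1] > gens[j]:
            gens[j - 1], gens[j] = gens[j], gens[j - 1]
            sign, j = -sign, j - 1
    out = []
    for g in gens:                           # contract equal neighbours: e_i e_i = sq[i]
        if out and out[-1] == g:
            sign *= sq[g]
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

def gp(A, B, sq):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sq)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

sq = {1: 1, 2: 1, 3: 1, 4: 1}                # Euclidean 4D
b = {(1, 2): 1}                              # b = e12
c = {(1, 2): 1, (2, 3): 1, (3, 4): 1}        # c = e12 + e23 + e34
bc = gp(b, c, sq)
print(bc)        # {(): -1, (1, 3): 1, (1, 2, 3, 4): 1}: grades 0, 2, 4

# the grade-2 part equals the commutator (1/2)(bc - cb)
cb = gp(c, b, sq)
comm = {k: (bc.get(k, 0) - cb.get(k, 0)) / 2 for k in set(bc) | set(cb)}
comm = {k: v for k, v in comm.items() if v != 0}
print(comm)      # {(1, 3): 1.0}
```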

In the case of the product between bivectors, you should notice that the result contains only elements of even grade. The elements of even grade are thus closed under the geometric product, and form what is commonly called the even subalgebra. (written e.g. \mathbb R^+_{2,0} - in 2D)

The inner and outer products can now be derived using the lazy approach … they’re really just the geometric product when you’re too lazy to calculate all of it. Using s for the grade of a and t for the grade of b:

\boxed{a \cdot b = \langle ab \rangle_{|s-t|}}
\boxed{a \wedge b = \langle ab \rangle_{s+t}}

(where the \langle x \rangle_y notation denotes the grade-y part of x)
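In code, the lazy definitions are one-liners on top of the geometric product and a grade projection. The following self-contained Python toy (my own blade encoding, for illustration only) checks them for a vector times a bivector in Euclidean 3D:

```python
def blade_mul(a, b, sq):
    """Product of two basis blades given as sorted tuples of generator
    indices; sq[i] is the square of e_i. Returns (sign, resulting_blade)."""
    gens, sign = list(a) + list(b), 1
    for i in range(1, len(gens)):            # insertion sort; each swap of
        j = i                                # distinct generators flips the sign
        while j > 0 and gens[j - 1] > gens[j]:
            gens[j - 1], gens[j] = gens[j], gens[j - 1]
            sign, j = -sign, j - 1
    out = []
    for g in gens:                           # contract equal neighbours: e_i e_i = sq[i]
        if out and out[-1] == g:
            sign *= sq[g]
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

def gp(A, B, sq):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sq)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def grade_part(A, k):
    return {b: c for b, c in A.items() if len(b) == k}

def inner(a, b, s, t, sq):
    """Lazy inner product of a (grade s) and b (grade t): <ab>_{|s-t|}."""
    return grade_part(gp(a, b, sq), abs(s - t))

def outer(a, b, s, t, sq):
    """Lazy outer product: <ab>_{s+t}."""
    return grade_part(gp(a, b, sq), s + t)

sq = {1: 1, 2: 1, 3: 1}           # Euclidean 3D
a = {(1,): 1}                      # a = e1   (grade 1)
b = {(1, 2): 1}                    # b = e12  (grade 2)
print(inner(a, b, 1, 2, sq))       # {(2,): 1}   e1 . e12 = e2
print(outer(a, b, 1, 2, sq))       # {}          e1 ^ e12 = 0
```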

From this perspective, there’s really no special product, and even the geometric product is reduced to the ordinary product, with its special properties transferred to special properties of the \mathbf e_i generators themselves.

There are two other products that are also worth your attention. One is called the regressive product \vee, defined as a \vee b = (a^* \wedge b^*)^*, and the other is called the sandwich product, which has no symbol of its own but takes its name from its form x y \tilde x, as one of the arguments sandwiches the other.

The wedge \wedge and vee \vee products are your partners for incidence relations, and can be used to join and meet geometric elements. (which one is which depends on your choice of geometry, hyper-plane based (e.g. PGA) or point based (e.g. CGA))

The sandwich product of a 1-vector \mathbf x_a with a general element \mathbf Y will turn out to encode a reflection :

\mathbf x_a \mathbf Y \mathbf x_a

And the combination of two reflections forms a rotation :

\mathbf x_b\mathbf x_a \mathbf Y \mathbf x_a \mathbf x_b = \mathbf x_{ba} \mathbf Y \tilde{\mathbf x_{ba}}

Where the tilde denotes the reversal. This explains why the sandwich product also shows up for quaternions, which are nothing more than the product of two vectors in 3D, just as complex numbers are the product of two vectors in 2D. Both are just a number and a plane; writing that plane takes just one number in 2D, and 3 in 3D. By projectivizing (as in PGA), everything stays the same, but the planes you reflect in are no longer attached to the origin, so you can use parallel planes to represent translations (arriving at the dual quaternions). By conformalizing (as in CGA), you can reflect in spheres, greatly extending the transformations and elements you can represent, while still building on the exact same ideas …
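To see the two-reflections-make-a-rotation story numerically, the self-contained Python toy below (my own blade encoding, plain Euclidean 2D, so no projective part) reflects \mathbf e_1 first in \mathbf e_1 and then in the unit vector at 45°; out comes \mathbf e_2, a 90° rotation:

```python
import math

def blade_mul(a, b, sq):
    """Product of two basis blades given as sorted tuples of generator
    indices; sq[i] is the square of e_i. Returns (sign, resulting_blade)."""
    gens, sign = list(a) + list(b), 1
    for i in range(1, len(gens)):            # insertion sort; each swap of
        j = i                                # distinct generators flips the sign
        while j > 0 and gens[j - 1] > gens[j]:
            gens[j - 1], gens[j] = gens[j], gens[j - 1]
            sign, j = -sign, j - 1
    out = []
    for g in gens:                           # contract equal neighbours: e_i e_i = sq[i]
        if out and out[-1] == g:
            sign *= sq[g]
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

def gp(A, B, sq):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sq)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

sq = {1: 1, 2: 1}                       # Euclidean 2D
a = {(1,): 1.0}                         # unit vector e1
s = 1 / math.sqrt(2)
b = {(1,): s, (2,): s}                  # unit vector 45 degrees from a
Y = {(1,): 1.0}                         # the vector we transform

refl = gp(gp(a, Y, sq), a, sq)          # a Y a : reflect Y in a
rot  = gp(gp(b, refl, sq), b, sq)       # b (a Y a) b : reflect again in b
print(rot)                              # ~ {(2,): 1.0}: rotated by 2 x 45 = 90 degrees
```

Since vectors are their own reverse, \tilde{\mathbf x} = \mathbf x here, so the sandwich needs no explicit reversal.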

Hope that helps you and other readers,

Steven.

9 Likes

As I described in my paper, writing the geometric product as the sum of the interior and exterior products is valid when one of the arguments is a vector field. It turns out that with vector fields this is equivalent to the Dirac operator (it’s a theorem):

Theorem: \Delta^\frac12\omega = \mp\omega\ominus\nabla = \pm\nabla\wedge\omega \mp \omega\llcorner\nabla = \pm d\omega\mp\partial\omega.

This is the so called square root of the Laplacian, i.e. Dirac operator. However, in general this is not enough to fully understand what the geometric product operation is.

It is important to understand that the Dirac operator is an instance of the geometric product, but it is not the most general definition of the geometric product. This is best understood in terms of symmetric difference

Definition: \omega_X\ominus \eta_Y = (-1)^{\Pi(\Lambda X,\Lambda Y)}\det g_{\Lambda(X\cap Y)} \bigotimes_{k\in \Lambda(X\ominus Y)} w_{i_k}

The geometric algebraic product is the \Pi oriented symmetric difference operator \ominus (weighted by the bilinear form g) and multi-set sum \oplus applied to multilinear tensor products \otimes in a single operation.

As defined in this way, the geometric algebraic product is the foundational building block from which the other products are constructed. This is all also mentioned in my paper and the presentation.
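For intuition only (this is a toy illustration of my own, not the notation or construction of the paper): on basis blades, the index bookkeeping of the geometric product is exactly a signed, metric-weighted symmetric difference of index sets. A self-contained Python sketch:

```python
def blade_mul(a, b, sq):
    """Product of two basis blades given as sorted tuples of generator
    indices; sq[i] is the square of e_i. Returns (sign, resulting_blade)."""
    gens, sign = list(a) + list(b), 1
    for i in range(1, len(gens)):            # insertion sort; each swap of
        j = i                                # distinct generators flips the sign
        while j > 0 and gens[j - 1] > gens[j]:
            gens[j - 1], gens[j] = gens[j], gens[j - 1]
            sign, j = -sign, j - 1
    out = []
    for g in gens:                           # contract equal neighbours: e_i e_i = sq[i]
        if out and out[-1] == g:
            sign *= sq[g]
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

# Euclidean 3D metric for the check
sq = {1: 1, 2: 1, 3: 1}
a, b = (1, 2), (2, 3)                    # e12 and e23
sign, blade = blade_mul(a, b, sq)
print(sign, blade)                       # 1 (1, 3)
# the surviving indices are the symmetric difference of the index sets ...
assert blade == tuple(sorted(set(a) ^ set(b)))
# ... while the shared indices (the intersection) contract via the metric,
# and the sign comes from reordering the generators
```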

The way I see it, the geometric algebraic product is at the root of everything. From this, the Grassmann and also the Leibniz algebras can be derived, on top of which you can also do Clifford-Dirac-Hestenes.

The sum of the differential and codifferential (Dirac-Clifford operator) is not the full definition of the product, rather it is an instance (or theorem) of geometric product when an argument is a vector field. The full definition is best understood in terms of the symmetric difference index algebra.

If my construction from the paper is used, then the Dirac-Clifford product is a theorem based on the underlying definition of the geometric algebraic product, which is more general.

1 Like

Wow. Thank you all for the very detailed answers! It helps a lot! (And a special thanks for one of the best SIGGRAPH courses.)

Best,
Dan

2 Likes

If the commutator is defined as

a \times b = \frac 1 2 (ab - ba)

is there also a symmetricator?

\frac 1 2 (ab + ba)

It is called the anticommutator (as it is called in e.g. physics, group theory, etc.).

E.

1 Like

No, actually it IS called the symmetrization.

Anti-commutator is another word for it, but the natural term without a prefix is symmetrization.

Calling it an anticommutator is kind of like calling flying “anti-gravity”.

Someone will ask “is there a word for flying through the air” and enki responds “yea, it’s called anti-gravity. E.”