Projective Algebra of Physical Space

Hello,
I have been thinking about relativistic PGA. Since I normally don’t do anything that could use GA, I don’t have a use for it myself. I haven’t seen anyone explore this approach with regard to PGA, so I am writing this in the hope that someone finds a use for it.
The “Algebra of Physical Space” (APS) is an approach to relativity using euclidean 3D GA. For reference see “Electrodynamics: A Modern Geometric Approach” by W. E. Baylis. An introduction can be found here (it also shows how APS relates to STA).
I have noticed that APS can be straightforwardly extended to PGA so that we can represent the Poincaré group. I think this may be useful even for “non-relativistic” applications, since the relativistic equations seem to have algebraic advantages over their non-relativistic approximations. For example, constant accelerations have an exponential form (\Lambda(\tau) = \exp(\frac{\Omega}{2} \tau)\Lambda(0)).
The unique thing about APS is the use of paravectors (= scalar + vector) and the intentional avoidance of dot and wedge products, which in my opinion makes the equations much more readable and easier to manipulate.
In the following I will be using notation and terminology of APS instead of PGA.
In APS e_0 is defined as the scalar 1, so that paravectors can be written in index notation; it is also sometimes useful to write factors of e_0 explicitly for illustration. For the dual vector I will use \varepsilon exclusively.
First, let me repeat the basic notational conventions of APS for completeness. Reversion is denoted by x^\dagger. The Clifford (or bar) conjugate is also used and is denoted by \overline{x}. The Clifford conjugate reverses the geometric product and negates all vectors (\overline{\mathbf{a} \mathbf{b}} = (-\mathbf{b})(-\mathbf{a})). Here that simply means it negates vectors and bivectors. The combination \overline{x}^\dagger is grade involution.
APS also defines i = e_1 e_2 e_3 and uses the terminology real and imaginary. This makes sense, since 3D GA can be thought of as an algebra of complex paravectors and i commutes with vectors.
APS defines algebraic projection operators: “scalarlike”, “vectorlike”, “real”, “imaginary”, “even”, “odd”:

\left<x\right>_S = \frac{x + \overline{x}}{2} = \left<x\right>_{0,3,4},\ \left<x\right>_V = \frac{x - \overline{x}}{2} = \left<x\right>_{1,2} \\ \left<x\right>_R = \frac{x + x^\dagger}{2} = \left<x\right>_{0,1,4},\ \left<x\right>_I = \frac{x - x^\dagger}{2} = \left<x\right>_{2,3} \\ \left<x\right>_+ = \frac{x + \overline{x}^\dagger}{2} = \left<x\right>_{0,2,4},\ \left<x\right>_- = \frac{x - \overline{x}^\dagger}{2} = \left<x\right>_{1,3}

Using \varepsilon i = -i \varepsilon we can define two additional projections “euclidean” and “dual”:

\left<x\right>_E = \frac{x + ixi^{-1}}{2},\ \left<x\right>_D = \frac{x - ixi^{-1}}{2}

Combinations of these can be used to algebraically express every single-grade projection (e.g. the real euclidean scalarlike part \left<x\right>_0 = \left<x\right>_{RES}) and almost every pair of adjacent grades, the one exception being \left<x\right>_{3,4}.
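These projections are easy to check numerically. Below is a minimal numpy sketch (the blade ordering and helper names are my own choice) that encodes each involution as a per-blade sign flip and verifies that the composition \left<x\right>_{RES} is exactly the grade-0 projection:

```python
import numpy as np

# 16 basis blades of 4D PGA: grade and number of eps factors per blade.
# Blade order (my own choice): 1; e1, e2, e3, eps; e12, e13, e23,
# eps*e1, eps*e2, eps*e3; e123 (= i), eps*e12, eps*e13, eps*e23; eps*e123.
grade = np.array([0, 1,1,1,1, 2,2,2,2,2,2, 3,3,3,3, 4])
neps  = np.array([0, 0,0,0,1, 0,0,0,1,1,1, 0,1,1,1, 1])

rev   = (-1) ** (grade * (grade - 1) // 2)  # signs of reversion x^dagger
conj  = (-1) ** (grade * (grade + 1) // 2)  # signs of the Clifford conjugate
dflip = (-1) ** neps                        # signs of i x i^{-1} (eps i = -i eps)

def proj(x, sign, s=+1):
    return (x + s * sign * x) / 2           # e.g. <x>_S = (x + bar x) / 2

x = np.arange(1.0, 17.0)                    # an arbitrary multivector
S, V = proj(x, conj), proj(x, conj, -1)     # scalarlike / vectorlike
R, I = proj(x, rev), proj(x, rev, -1)       # real / imaginary
E, D = proj(x, dflip), proj(x, dflip, -1)   # euclidean / dual

assert np.allclose(S + V, x) and np.allclose(R + I, x) and np.allclose(E + D, x)
# real euclidean scalarlike = pure grade-0 projection, as claimed:
res = proj(proj(proj(x, conj), dflip), rev)
assert np.allclose(res, np.where(grade == 0, x, 0.0))
```

The `res` assertion picks out grades {0,3,4} ∩ {eps-free} ∩ {0,1,4}, which leaves only the scalar component.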
The scalarlike elements in APS are just complex numbers, but with \varepsilon we also get dual trivector and fourvector components. The pseudoscalar \varepsilon i is surprisingly categorized as real.

Now to paravectors. In the following, “paravector” without any qualifiers will mean a real euclidean multivector p = p_0 + p_1 e_1 + p_2 e_2 + p_3 e_3 = p_0 + \mathbf{p}. Paravectors embed a 4D GA in a 3D GA.
Paravectors have a quadratic form Q(p) = p\overline{p} = p_0^2 - p_1^2 - p_2^2 - p_3^2 which naturally has the Minkowski metric.
This quadratic form is also useful to invert an arbitrary invertible multivector M^{-1} = \overline{M}Q(M)^{-1}.
Since Q(M) is always scalarlike (\left<M\overline{M}\right>_S = M\overline{M}):
Q(M)^{-1} = (a + bi + p \varepsilon i)^{-1} = (a - bi - p \varepsilon i)/(a^2 + b^2)
where a and b are scalars and p is a paravector.
A paravector is called spacelike if Q(x) < 0, timelike if Q(x) > 0 and null or lightlike if Q(x) = 0.
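On the euclidean (ε-free) part of the algebra, these formulas can be sanity-checked with the standard Pauli-matrix representation of APS (e_k ↔ σ_k): the Clifford conjugate becomes the classical 2×2 adjugate and Q(M) becomes \det(M). A sketch, with helper names of my own:

```python
import numpy as np

# Pauli-matrix representation of (euclidean) APS: e0 -> I, e_k -> sigma_k.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def para(p):                 # paravector p0 + p1 e1 + p2 e2 + p3 e3
    return p[0]*s0 + p[1]*s1 + p[2]*s2 + p[3]*s3

def bar(M):                  # Clifford conjugate = classical adjugate of M
    return np.trace(M) * s0 - M

def Q(M):                    # quadratic form: M bar(M) = Q(M) * I
    return (M @ bar(M))[0, 0]

p = para([2.0, 1.0, 0.5, 0.0])
assert np.isclose(Q(p), 2.0**2 - 1.0**2 - 0.5**2)   # Minkowski signature
assert np.isclose(Q(p), np.linalg.det(p))           # Q(M) = det(M)
assert np.allclose(p @ (bar(p) / Q(p)), s0)         # M^{-1} = bar(M) Q(M)^{-1}
assert Q(p).real > 0                                # this p is timelike
```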

The geometric product in paravector space uses alternating bar conjugation, e.g. with paravectors p, q, r, s: p\overline{q}r\overline{s}. Notice that e_\mu \overline{e_\nu} = -e_\nu \overline{e_\mu} for \mu \neq \nu, even with e_0 = 1.
Grade-n vectors play a double role in paravector space, as n-paravectors and as (n + 1)-paravectors (with an e_0 factor). This is a feature of APS, not a bug.

Regarding PGA’s inverted terminology with vectors as planes: I don’t think it fits this approach, because the terminology changes depending on the dimension. I think the correct thing is to call an n-blade in this dual algebra an nD point. This is consistent and independent of the dimension. The euclidean part denotes the nD space in which the point lies (and notice that it projects a translation into its space). This also works in paravector space.
Notice how a timelike or spacelike 1-paravector (parapoint?) factors:
p + \alpha \varepsilon = (1 + \alpha \varepsilon p^{-1}) p = (1 + \frac{\alpha \varepsilon \overline{p}}{Q(p)}) p = (1 + \frac{\alpha p}{Q(p)} \varepsilon) p

For representation I would group the components as paravectors like this:
M = p + q i + r \varepsilon + s \varepsilon i
Paravectors fit well into SIMD vectors: \left<pq\right>_0 is the euclidean 4D dot product, \left<pq\right>_1 = p_0 \mathbf{q} + q_0 \mathbf{p} and \left<pq\right>_2 = \mathbf{p} \wedge \mathbf{q} = (\mathbf{p} \times \mathbf{q}) i. The paravector product can therefore be computed efficiently with common SIMD operations. This is with pseudoscalar \varepsilon i; for the opposite pseudoscalar I would use M = p + qi + r \overline{\varepsilon} + s i\varepsilon.
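The grade parts listed above translate directly into a short vectorized routine; a numpy sketch of the real-paravector product (function name mine):

```python
import numpy as np

def paraproduct(p, q):
    """Grade parts of the geometric product p q of two real paravectors,
    each given as 4 components (p0, p1, p2, p3)."""
    s = p[0]*q[0] + p[1:] @ q[1:]        # <pq>_0: euclidean 4D dot product
    v = p[0]*q[1:] + q[0]*p[1:]          # <pq>_1 = p0 q + q0 p
    b = np.cross(p[1:], q[1:])           # <pq>_2 = (p x q) i, as an axial vector
    return s, v, b

p = np.array([1.0, 1.0, 0.0, 0.0])       # 1 + e1
q = np.array([2.0, 0.0, 1.0, 0.0])       # 2 + e2
s, v, b = paraproduct(p, q)
assert s == 2.0
assert np.allclose(v, [2.0, 1.0, 0.0])
assert np.allclose(b, [0.0, 0.0, 1.0])   # the e1 e2 component
```

Each part is a dot product, two scaled adds, or a cross product, which is why the product maps well onto SIMD shuffle/FMA sequences.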

I define the Hodge dual x^* as division by the pseudoscalar \varepsilon i on the right with the modified rule \varepsilon^2 = 1. This simply reverses the order of paravectors in the above representation: M^* = s + r i + q \varepsilon + p \varepsilon i

Poincaré rotors are the unitary multivectors, i.e. those with M\overline{M} = 1. You can always factor them into translations, boosts and spatial rotors: M = T L = T B R.
In particular, we can reduce memory consumption by 25% by storing
T = 1 + \frac{t}{2} \varepsilon and L = p + qi. Products easily preserve this factorization:
M M' = T L T' L' = (T (L T' \overline{L})) (L L'), with L T' \overline{L} = 1 + L \frac{t'}{2} L^\dagger \varepsilon.
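Assuming the Pauli-matrix representation for L (e_k ↔ σ_k, as is standard for APS), the factored composition rule can be checked numerically. Translations are kept as full paravectors t, since a boost can mix a spatial translation with the time direction; the helper names are mine:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def para(p):                   # paravector -> 2x2 matrix (e_k -> sigma_k)
    return p[0] * s0 + sum(pk * sk for pk, sk in zip(p[1:], sig))

def unpara(M):                 # matrix -> paravector components
    return np.real([np.trace(M) / 2] + [np.trace(sk @ M) / 2 for sk in sig])

def compose(t, L, t2, L2):
    """(T L)(T' L') = T'' L'' with t'' = t + L t' L^dagger, L'' = L L'."""
    t_new = t + unpara(L @ para(t2) @ L.conj().T)
    return t_new, L @ L2

# Example: L is a 90-degree rotation in the e1 e2 plane,
# L = exp(-e1 e2 pi/4) = cos(pi/4) - sin(pi/4) * (i sigma_3).
L = np.cos(np.pi/4) * s0 - 1j * np.sin(np.pi/4) * sig[2]
t  = np.array([0.0, 1.0, 0.0, 0.0])          # translation along e1
t2 = np.array([0.0, 1.0, 0.0, 0.0])          # second translation along e1
t_new, L_new = compose(t, L, t2, s0)
assert np.allclose(t_new, [0.0, 1.0, 1.0, 0.0])  # L rotates t2 onto e2
assert np.allclose(L_new, L)
```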
We always have L = \left<M\right>_E and T = (M i \overline{M} i^{-1})^{\frac{1}{2}} = 1 + \left<M i \overline{M} i^{-1}\right>_D / 2.

Odd grade paravectors transform differently from even grade ones:
X' = M X \overline{M} if X is even
X' = M X M^\dagger if X is odd

A 4D point/space-time event is a 4-paravector and has the form X = i + x \varepsilon i, with x a paravector.
The concept of the eigenspinor can be extended so that the origin in the rest frame stays at X_{rest} = i.
For an eigenspinor we require position X(\tau) = \Lambda(\tau) i \overline{\Lambda(\tau)} and proper velocity u = \left< \Lambda(\tau) e_0 \Lambda(\tau)^\dagger \right>_E.
The rotation rate is \Omega = 2 \dot{\Lambda} \overline{\Lambda}; in the rest frame \Omega_{rest} = \overline{\Lambda} \Omega \Lambda = 2 \overline{\Lambda} \dot{\Lambda}. The rest frame never “feels” its velocity: it is always \left< \Omega_{rest} \right>_D = c \varepsilon, as one would expect.
The derivative of X is \dot{X} = \dot{\Lambda} i \overline{\Lambda} + \Lambda i \dot{\overline{\Lambda}} = \dot{\Lambda} i \overline{\Lambda} + \overline{\dot{\Lambda} i \overline{\Lambda}} = \left< \Omega X \right>_S.
But also \dot{X} = \dot{x} \varepsilon i = c u \varepsilon i = \Lambda c \Lambda^\dagger \varepsilon i = \Lambda c \varepsilon i \overline{\Lambda}.
So c u = \dot{X}^*. We can think of \dot{X}^* as the derivative of X with respect to \tau \varepsilon i.

A PGA motor M(t) can easily be turned into a relativistic eigenspinor \Lambda(\tau) = M(\gamma^{-1} \tau) B(v_b) (1 + c \tau \varepsilon), with \tau the proper time, B(v) = u^{\frac{1}{2}} = (\frac{c + v}{\sqrt{Q(c + v)}})^{\frac{1}{2}} = \frac{1 + u}{\sqrt{2 \left<1 + u\right>_S}} and \gamma = \frac{c}{\sqrt{Q(c + v)}}. The PGA body frame is not the rest frame, which is why it “feels” the velocity (it is at rest relative to the observer, not to the moving object).
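With c = 1 the boost factor B(v) = (1 + u)/\sqrt{2\left<1 + u\right>_S} needs only paravector arithmetic; a quick numerical check (helper names mine) that B is unitary and squares to the proper velocity u:

```python
import numpy as np

def boost(v):                    # v: 3-velocity, units with c = 1
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    u = np.concatenate([[gamma], gamma * v])           # proper velocity paravector
    one_plus_u = u + np.array([1.0, 0.0, 0.0, 0.0])
    return one_plus_u / np.sqrt(2.0 * (1.0 + gamma))   # B = (1+u)/sqrt(2<1+u>_S)

def Q(p):                        # paravector quadratic form p bar(p)
    return p[0]**2 - p[1:] @ p[1:]

def square(B):                   # B^2 for a real paravector B = B0 + b
    return np.concatenate([[B[0]**2 + B[1:] @ B[1:]], 2 * B[0] * B[1:]])

B = boost(np.array([0.6, 0.0, 0.0]))                   # gamma = 1.25
assert np.isclose(Q(B), 1.0)                           # unitary: B bar(B) = 1
assert np.allclose(square(B), [1.25, 0.75, 0.0, 0.0])  # B^2 = u
```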

The proper momentum triparavector for a point mass is:
P = \left< m X \dot{X}^* \right>_I = \left< m X c u \right>_I = \left< m c \Lambda i \overline{\Lambda} \Lambda \Lambda^\dagger \right>_I = \left< m c \Lambda i \Lambda^\dagger \right>_I = m c \Lambda i \Lambda^\dagger
The derivative of P is \dot{P} = \left< \Omega P \right>_I.
A triparavector can be thought of as a position in a 3D slice of simultaneous events. Triparavectors are dual to paravectors in APS and simply have the opposite metric Q(pi) = pi \overline{pi} = -Q(p).
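In the Pauli-matrix picture (i ↔ iI) this metric flip is immediate: Q(pi) corresponds to \det(iM) = i^2 \det(M) = -Q(p). For example:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
p = 2.0 * s0 + 1.0 * s1          # the paravector 2 + e1, with Q(p) = 3
# multiplying by i = e1 e2 e3 (the matrix iI) flips the sign of Q:
assert np.isclose(np.linalg.det(1j * p), -np.linalg.det(p))
```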

Rigid bodies can also be handled relativistically by applying the rigidity condition only in the rest frame.
The total momentum is also just the sum P = \sum_k P_k.
The inertia map then looks like this: I[\Omega] = \sum_k m_k \left<X_k \left<\Omega X_k\right>_S^*\right>_I
We can split the rest frame rotation rate into components (with \mathbf{a} the rest frame acceleration vector and \mathbf{B} the rest frame spatial rotation rate bivector): \Omega_{rest} = \mathbf{a} + \mathbf{B} + c \varepsilon
If we choose i to be the center of mass in the rest frame, with relative vectors \mathbf{r}_k (\sum_k m_k \mathbf{r}_k = 0), the rest frame inertia decomposes into:
I_{rest}[\Omega_{rest}] = \sum_k m_k \left<(i + \mathbf{r}_k \varepsilon i) \left<\Omega_{rest} (i + \mathbf{r}_k \varepsilon i)\right>_S^*\right>_I \\ = m c i + \sum_k m_k \left<\mathbf{r}_k \overline{\left<\Omega_{rest} \mathbf{r}_k\right>}_R\right>_V \varepsilon i \\ = m c i + (\sum_k m_k \mathbf{r}_k \wedge (\mathbf{r}_k \cdot \mathbf{B})) \varepsilon i + (\sum_k m_k \mathbf{r}_k(\mathbf{r}_k \cdot \mathbf{a})) \varepsilon i
The middle dual bivector term is just the “classical inertia” times \varepsilon i. I don’t know what the right dual trivector term is called.
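To see why the middle term is the classical inertia, write the bivector as \mathbf{B} = i\mathbf{b}; with the usual inner-product conventions \mathbf{r} \wedge (\mathbf{r} \cdot \mathbf{B}) = i\,\mathbf{r} \times (\mathbf{b} \times \mathbf{r}), so the sum is the classical inertia tensor applied to \mathbf{b}. A numerical check with an arbitrary mass distribution (signs here depend on the chosen conventions):

```python
import numpy as np

rng = np.random.default_rng(0)
ms = rng.uniform(0.5, 2.0, size=5)              # arbitrary point masses
rs = rng.normal(size=(5, 3))
rs -= (ms @ rs) / ms.sum()                      # shift so that sum_k m_k r_k = 0

b = np.array([0.3, -1.0, 2.0])                  # axial vector of B = i b

# sum_k m_k r_k ^ (r_k . B)  <->  sum_k m_k r_k x (b x r_k)  (as axial vector)
term = sum(m * np.cross(r, np.cross(b, r)) for m, r in zip(ms, rs))

# classical inertia tensor of the same mass distribution
I_cl = sum(m * ((r @ r) * np.eye(3) - np.outer(r, r)) for m, r in zip(ms, rs))
assert np.allclose(term, I_cl @ b)
```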
The rest frame derivative is of course: \frac{d}{d\tau}I_{rest}[\Omega_{rest}] = I_{rest}[\dot{\Omega}_{rest}].
Since \Omega^2 is scalarlike, the derivative of \Omega is simply the transformed derivative of \Omega_{rest}:
\dot{\Omega} = \dot{\Lambda} \Omega_{rest} \overline{\Lambda} + \Lambda \Omega_{rest} \dot{\overline{\Lambda}} + \Lambda \dot{\Omega}_{rest} \overline{\Lambda} = \left< \Omega^2 \right>_V + \Lambda \dot{\Omega}_{rest} \overline{\Lambda} = \Lambda \dot{\Omega}_{rest} \overline{\Lambda}
In the observer frame: \frac{d}{d\tau}I[\Omega] = \frac{d}{d\tau}(\Lambda I_{rest}[\overline{\Lambda} \Omega \Lambda] \Lambda^\dagger) = \left< \Omega I[\Omega] \right>_I + I[\dot{\Omega}].

You can easily convert between this and whatever the projective version of STA looks like by setting \gamma_\mu = e_\mu \gamma_0. But you don’t gain anything in expressivity by using STA, because anything physical expressed in STA can be translated into APS, despite the fact that APS has only half as many components as STA. This can be seen from the fact that the sum of even and odd vectors in STA has no physical meaning. Also, the transition from non-relativistic physics is much simpler with APS, since it uses the same algebra for both.