If you want to replace the 4x4 matrices in your OpenGL shaders with sandwich products, you can only do rotation and translation with PGA, if I am right.

How would you do (non-uniform?) scaling?

(Non-uniform) scaling is a useful transformation. As the name says, it doesn’t preserve the metric (distances and angles). It is in fact a projective transformation, given in matrix form by an arbitrary diagonal matrix.

In PGA, every sandwich operator with a “versor” is a metric isometry. (A versor is an element of the even sub-algebra with norm 1.) The price for having the smallest possible geometric algebra for euclidean (or non-euclidean) geometry is that there are no sandwich operators “left over” to do other non-metric operations such as non-uniform scaling.

Non-uniform scaling (along with any other projective transformations) can be represented in PGA as an “outermorphism” of the algebra. The appropriate form for acting on different grades is hidden from the user, so that one can simply specify the outermorphism in a familiar matrix form for the 1-vectors and the action on any multivector will be correctly carried out.
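
A concrete way to see what an outermorphism demands (a small numpy sketch, not from the post, using the 3D cross product as a stand-in for the wedge of two 1-vectors): extending a linear map M from vectors to wedges forces the grade-2 part to transform by det(M)·M⁻ᵀ rather than by M itself, which is exactly why a non-uniform scale acts differently on planes and normals than on points.

```python
import numpy as np

# A non-uniform scale, written as a diagonal matrix acting on 1-vectors.
M = np.diag([2.0, 3.0, 5.0])

a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 2.0])

# In 3D the cross product plays the role of the wedge a ^ b (up to duality).
# An outermorphism must satisfy f(a ^ b) = f(a) ^ f(b); for a linear map M
# that forces the grade-2 part to transform by det(M) * inv(M).T, not by M:
lhs = np.cross(M @ a, M @ b)
rhs = np.linalg.det(M) * np.linalg.inv(M).T @ np.cross(a, b)
print(np.allclose(lhs, rhs))  # True: the extended map agrees on all inputs
```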

What’s more important: having the most compact geometric algebra for euclidean geometry or being able to write some projective transformations using sandwiches instead of as outermorphisms? Each user can decide based on his/her needs.

Perhaps someone else can provide details about a GA that does support non-uniform scaling using sandwiches.


Hi Phew, welcome to the forum!

I’m considering this question in a realtime CG context (which I take from your reference to OpenGL shaders). In this context there are several factors to consider when changing from 4x4 homogeneous matrices to versors and sandwich products. If you observe that in your current pipeline the vast majority of your matrices are orthogonal, then the following observations can be made:

- you are using 16 numbers to express 6 degrees of freedom
- because of this, large kinematic chains (just a few parents will do…) quickly induce numerical errors.
- correcting these errors is non-trivial (Gram-Schmidt is biased by its selection of an initially assumed ‘correct’ vector)
- iterative algorithms like physics integrators have 10 extra dimensions in which to run away from the solution manifold.

Now of course the upside of `4x4` matrices is the same as their downside: they can encode the full general linear group of transformations, so integrating scaling and the camera projection is trivial. However, since specular reflection I have not concatenated my modelview and projection matrices, and similarly I’ve consistently punished artists that include scaling in a kinematic chain, as it cripples the one option you have to restore orthogonality. Scaling, both uniform and non-uniform, often only makes sense when applied as the very first step of the kinematic chain, and can trivially be implemented with scalar (uniform) or per-coefficient (non-uniform) multiplication, before applying any sandwich products. For the projection step, there is little reason to move away from your standard projection matrix (although you could: it’s a \vee with the camera point and a \wedge with the projection plane).
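
The “scaling first” step can be sketched in a few lines of numpy (variable names are mine, not from the post): the non-uniform scale is a plain per-coefficient multiply on the vertex data before any sandwich products are applied.

```python
import numpy as np

# Two example vertices (rows) and a non-uniform scale applied first,
# as a plain per-coefficient multiplication:
vertices = np.array([[1.0, 1.0, 1.0],
                     [2.0, 0.0, -1.0]])
scale = np.array([2.0, 1.0, 0.5])
scaled = vertices * scale  # broadcasts over the rows

# only now would `scaled` be fed into the motor/versor sandwich products;
# a uniform scale is just `vertices * s` for a scalar s.
print(scaled)
```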

To complete the list above, the advantages of the versor representation are:

- 8 floats instead of 16
- 6 degrees of freedom in the pure bivectors. (i.e. everything is on the solution manifold)
- exponential map with closed form implementation
- simple renormalisation of the motor keeps you on the motion manifold
- unifies translation/rotation completely (even for kinematics, in any number of dimensions, in euclidean, hyperbolic and elliptic cases)

Additionally, making this switch in the places where you now use orthonormal matrices also means you get the quaternions for free (without their from/to matrix conversion functions).

And while it is somewhat more limiting, it is also quite easy to build a 4x4 matrix (that can be used to transform your vertices) from a versor in PGA (although then, of course, you would not be able to e.g. interpolate those per vertex)…
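
One hedged sketch of such a conversion, assuming the motor has already been split into a unit rotor stored as quaternion-style coefficients (w, x, y, z) and a translation vector t; how you read these coefficients off a PGA motor depends on your library’s basis conventions, so this is an illustration, not the one canonical recipe:

```python
import numpy as np

def rotor_to_matrix4(w, x, y, z, t):
    """Standard quaternion-to-matrix formula plus a translation column.
    (w, x, y, z) is assumed to be a unit rotor in quaternion form; the
    sign/ordering of the bivector coefficients of an actual PGA motor
    depends on your basis conventions."""
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# 90-degree rotation about z followed by a translation along x:
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
M = rotor_to_matrix4(c, 0.0, 0.0, s, np.array([1.0, 0.0, 0.0]))
p = M @ np.array([1.0, 0.0, 0.0, 1.0])  # a point on the x-axis
print(np.round(p, 6))  # the point ends up at (1, 1, 0)
```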

So short story: do scaling first, then versors, and keep your projection matrix - that setup will give you the best of each approach.

Cheers,

Steven.


Thank you both for your responses!

For practical purposes I thought of exactly what you described, @enki: scale only at the beginning with simple multiplication, and then do the rotation and translation chain. It also makes sense to convert the final versor to a 4x4 matrix before applying it to many points, since the computational effort is lower.

I was still interested in the mathematical description of the outermorphism of scaling. Since I’m not a mathematician, it took me a little while to work out, and what I’ll describe now may be trivial to you, but I think it might be helpful for someone like me who wants to understand it practically.

First, I think of scaling as moving whatever primitive you have (point, line, plane) from/to the origin by a factor. Since in PGA our 1-vectors are planes and we start with our outermorphism for the 1-vectors as @cgunn3 wrote, let’s have a look at a simple plane: x - 1 = 0, or E1 - E0 in PGA. This is the yz-plane moved by 1 along the x-axis. If we scale it by a factor s, it moves to x - s = 0, or E1 - s E0. Since this example is a moved yz-plane it doesn’t make a difference if you do uniform scaling or non-uniform scaling along x. Since any scaling of the 1-vector doesn’t change the plane, you could also write E1 / s - E0 which hints at how you can do non-uniform scaling.

We find that we can scale any plane a E0 + b E1 + c E2 + d E3 as follows: a E0 f + b E1 f / x + c E2 f / y + d E3 f / z. Here x, y and z are the non-uniform scaling factors and f is arbitrary as long as it is not zero. We will look at possible choices for f further below, but what I already want to mention here is that in general the resulting (multi-)vector will not be normalized after scaling even if it was before.
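
As a sanity check, this formula is easy to put in code (a sketch with hypothetical names; the plane is stored as its coefficient tuple (a, b, c, d) on (E0, E1, E2, E3)):

```python
import math

def scale_plane(a, b, c, d, x, y, z, f=1.0):
    """Scale the plane a*E0 + b*E1 + c*E2 + d*E3 by factors (x, y, z);
    f is arbitrary as long as it is non-zero."""
    return (a * f, b * f / x, c * f / y, d * f / z)

def normalize_plane(a, b, c, d):
    """Divide by the Euclidean norm of the (E1, E2, E3) part."""
    n = math.sqrt(b * b + c * c + d * d)
    return (a / n, b / n, c / n, d / n)

# The plane x - 1 = 0, i.e. E1 - E0, scaled by 2 along x:
p = scale_plane(-1.0, 1.0, 0.0, 0.0, 2.0, 1.0, 1.0)
print(normalize_plane(*p))  # (-2.0, 1.0, 0.0, 0.0), i.e. the plane x - 2 = 0
```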

If you normalize after scaling, though, f will be gone.

Now for the outermorphism, let’s see what happens to points, since that’s what I’m mostly interested in, so let’s scale the point a E021 + b E013 + c E032 + E123:

a E021 f³ / (x y) + b E013 f³ / (x z) + c E032 f³ / (y z) + E123 f³ / (x y z)

After normalization, we end up with the expected a E021 z + b E013 y + c E032 x + E123.
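
This can be checked numerically (a sketch with hypothetical names; the point is stored as its coefficients (a, b, c) on (E021, E013, E032) with 1 on E123):

```python
import math

def scale_point(a, b, c, x, y, z, f):
    """Apply the scaling outermorphism to the point
    a*E021 + b*E013 + c*E032 + E123 with an arbitrary non-zero f."""
    f3 = f ** 3
    return (a * f3 / (x * y), b * f3 / (x * z), c * f3 / (y * z), f3 / (x * y * z))

a, b, c = 1.5, -2.0, 0.5
x, y, z = 2.0, 3.0, 4.0
pa, pb, pc, pw = scale_point(a, b, c, x, y, z, f=7.0)

# Normalizing (dividing by the E123 coefficient) removes f entirely and
# leaves the expected coordinates a*z, b*y, c*x:
print(math.isclose(pa / pw, a * z), math.isclose(pb / pw, b * y), math.isclose(pc / pw, c * x))
```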

Now for the choice of f: you can of course simply choose 1, to save some computation if you implement this.

An alternative would be to set f = (x y z)^(1/3), since with this value you would get the scaled point immediately in the above example.

However, probably the smartest choice for f is f = x y z since this turns scaling the plane to a E0 x y z + b E1 y z + c E2 x z + d E3 x y. This avoids any divisions and thus also accounts for the case where one of the scaling factors is zero.
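
As a sketch (hypothetical name), the f = x y z variant is then division-free, so a zero scale factor is harmless:

```python
def scale_plane_divfree(a, b, c, d, x, y, z):
    """Scale a*E0 + b*E1 + c*E2 + d*E3 with the choice f = x*y*z:
    no divisions, so zero scale factors are allowed."""
    return (a * x * y * z, b * y * z, c * x * z, d * x * y)

# Flattening onto the yz-plane (x scale factor of 0) just works:
# the plane x - 1 = 0 collapses to x = 0.
print(scale_plane_divfree(-1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0))
```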

Lastly, I wanted to have a look at normal vectors, since these frequently cause problems when we try to implement shaders with non-uniform scaling. Representing normal vectors as ideal points (a E021 + b E013 + c E032) and scaling these doesn’t work. You have to take the dual view and consider them as planes (a E3 + b E2 + c E1) which can then be scaled and normalized.

I hope this helps someone.

Cheers


This is very interesting analysis, thanks.

I have a question. With regard to normal vectors, you say at the end that you have to treat them as planes. Isn’t it possible to just leave out the E123 term (the “origin”) in the point representation to obtain the desired result? After all, vectors are just ideal points. That the result is the same as for a plane with the same (a, b, c) coefficients isn’t surprising, since the normal vector of a plane \mathbf{p} is given by \mathbf{pI}, where \mathbf{I} is the pseudoscalar.

The math doesn’t work out for this. Let’s consider just scaling by x (y = z = 1) and, for simplicity, f = 1:

The scaled plane would be: a / x E1 + b E2 + c E3

The ideal point (normal vector) would be: c / x E021 + b / x E013 + a E032

Let’s look at one component that should be equal after normalization:

```
a / sqrt(c² / x² + b² / x² + a²) = a / (x sqrt(a² / x² + b² + c²))
```

This is clearly not equal!

I just tried this with a simple code example:

```
import math
import numpy as np
# assumes the PGA3D reference implementation (e.g. pga3d.py from bivector.net),
# which provides the PLANE helper and the Dual method
from pga3d import *

a, b, c = np.random.rand(3)
plane = PLANE(a, b, c, 0) * (1 / math.sqrt(a ** 2 + b ** 2 + c ** 2))
normal = plane.Dual()
scale_x = 0.5
scaled_normal = normal + 0  # copy
scaled_normal[12] /= scale_x  # e013 coefficient
scaled_normal[11] /= scale_x  # e021 coefficient
# normalize the ideal point
scaled_normal *= 1 / math.sqrt(scaled_normal[11] ** 2 + scaled_normal[12] ** 2 + scaled_normal[13] ** 2)
scaled_plane = plane + 0  # copy
scaled_plane[2] /= scale_x  # e1 coefficient
# normalize the plane
scaled_plane *= 1 / math.sqrt(scaled_plane[2] ** 2 + scaled_plane[3] ** 2 + scaled_plane[4] ** 2)
print(scaled_plane)
print(scaled_normal.Dual())
```

What we get are two lines that should be the same if you could simply scale the normal:

```
0.7889046e1 + 0.5013367e2 + 0.3553745e3
0.3055924e1 + 0.7767971e2 + 0.5506356e3
```