Numerical stability issues on trivial tests


I was just writing and evaluating my baby’s first comparison tests between generating a plane from the join of three PGA points versus a conventional vector library, which constructs a plane from two cross products and a dot product with a point. When I substitute the initial points back into the standard plane equation, which should evaluate to 0, I found that the PGA representation has a small error (about xx.e-05) while the vector representation is exactly zero.

I’m using the C++ implementation generated from ganja, so they’re all just floats. All of the initial point values fall within the range [-10.f, 10.f]. I’m wondering if anyone else has encountered this, or if there are known solutions to get errors comparable to the traditional solutions.


It would help if you could be more precise about what you are testing with some code examples.

julia> using Grassmann

julia> basis"2"
(⟨++⟩, v, v₁, v₂, v₁₂)

julia> exp(1+v12) # using Grassmann
1.468693950082839 + 2.287355299021966v₁₂

julia> exp(1+im) # using built-in Complex
1.4686939399158851 + 2.2873552871788423im

For example, in my package the exp function currently comes with some loss of accuracy. The algorithms can always be tweaked a bit, and arbitrary precision could be used if desired. The trigonometric functions are the only functions where I currently do any sort of approximation rounding.

Using a standard vector approach I see errors around 1e-7, which is expected for floating-point accuracy in the ranges you stated. If I use the exact point I created the plane with I’ll get an error of 0, but that’s not a really good test. Try any other point on the plane (within your [-10, 10] range) and you should see errors like I do.

1e-5 does seem a bit high for ganja, but not crazy; it’s actually pretty easy to get inaccuracies of that magnitude. Reorganizing the code to keep the numbers involved in each arithmetic step at comparable magnitudes could help.

julia> 12346.0f0-((12345.0f0 + 1.0f0)/12345.0f0)*12345.0f0

julia> 12346.0f0-((12345.0f0 + 1.0f0))

Hi William,

Welcome to the forum!

The behavior you describe is entirely expected: the outer product approach is capable of handling points with a projective coordinate that is not necessarily 1, i.e. weighted points. It is also quite easy to resolve … try the following:

plane = (p1 & p2 & p3).Normalized;

This brings errors on random points in the plane into the 10^{-7} range, which, as Orbots mentioned, is the same as the VA approach. (But by renormalizing the plane you throw away its ‘size’ information, so it may not be the best strategy in practice.)

A plane in PGA encodes more information by correctly maintaining the ideal components. A VA approach that encodes the same information takes 4 cross and 4 dot products. For example,
if one constructs the PGA planes P_i for each triangle in a mesh, one can calculate the mesh’s total area and volume using:

a = \frac 1 2 \sum_i || P_i || \\
v = \frac 1 3 || \sum_i P_i ||_\infty

where the ideal norm ||P||_\infty of a plane is simply its \mathbf e_0 coefficient. So un-normalized PGA lines and planes encode this extra information (resulting in bigger coefficients, and thus exaggerating IEEE floating-point artefacts).

For a quick javascript test of the renormalisation, see here.
