How to compute the eigenvectors of (inertia) tensors in GA?

Is there any way in GA to solve the eigenvector problem without matrices?

For example, consider the tensor of inertia of a uniform cuboid with respect to the center of mass:

T({\bf u}) = {\bf a}\,{\bf a} \wedge {\bf u} + {\bf b}\,{\bf b} \wedge {\bf u} + {\bf c}\,{\bf c} \wedge {\bf u}

What is the best way to compute or translate this to the inertia tensor with respect to an edge?

And is there any general geometric method to compute the eigenvalues and eigenvectors?

This is an important problem in dynamics because the eigenvectors of such symmetric tensors determine the three principal axes of any body, the axes in which Euler's equation decomposes into three scalar equations.

If the space has dimension n, then a matrix may be represented as a bivector in R(n,n) using only the cross terms e_i f_j, of which there are n^2. You may then use rotors in the space spanned by the e_i and in the space spanned by the f_j, for example c + s\,e_{12}, to generate rotations, and this may be used to diagonalize the matrix. But while the GA picture is most useful for understanding what is going on, the actual calculation would most profitably use the code developed over 60 or so years in linear algebra.

GA4Ph gives a succinct GA version of the inertia tensor as a mapping from bivectors to bivectors, but it leaves the calculation of the principal axes to a discussion of symmetric matrices. It seems to me that you could think about strategies for diagonalizing a symmetric matrix in R(n,n), as mentioned above. I have a recent short note on the SVD in the questions channel on Enki's Discord. I find it easier to contemplate sandwiches with rotors c + s\,e_{ij} than n-by-n rotation matrices, and of course reflections are more succinctly described in GA than with Householder matrices. In short, I find that GA helps understanding, but number crunching may require other strategies. Hope this helps.
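To make the rotor picture concrete, here is a rough numpy sketch (just an illustration, not anyone's production code) of the classical Jacobi method: each step is a rotation in a single e_i e_j coordinate plane, i.e. a rotor c + s\,e_{ij} in matrix clothing, chosen to annihilate one off-diagonal entry of a symmetric matrix.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_sweeps=50):
    """Diagonalize a symmetric matrix by plane (Givens) rotations.
    Each rotation is the matrix form of a rotor sandwich acting
    in a single e_i e_j plane."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)                          # accumulated rotations = eigenvectors
    for _ in range(max_sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # rotation angle that zeroes the (p, q) entry
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J            # the "sandwich"
                V = V @ J
    return np.diag(A), V                   # eigenvalues, eigenvector columns

# principal axes of a randomly oriented inertia tensor
R, _ = np.linalg.qr(np.random.randn(3, 3))   # a random orthogonal matrix
vals, vecs = jacobi_eigen(R @ np.diag([1.0, 2.0, 3.0]) @ R.T)
print(np.sort(vals))                          # ~ [1. 2. 3.]
```

Read in GA, the same sweep is a sequence of rotor sandwiches R_k A \tilde{R}_k, which I find easier to reason about even when the arithmetic is done with matrices.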


I should add that when dealing with bivectors, the so-called invariant decomposition is very useful for understanding.

@Leo_Kovacic1 I’m interested in this topic (just out of curiosity; I’m still a beginner in GA, so I’m trying to learn).

I did not really understand your formula for the inertia tensor.
Did you mean T(U) = {\bf a} ({\bf a} \wedge U) + {\bf b} ({\bf b} \wedge U) + {\bf c} ({\bf c} \wedge U)? If U is a bivector, this leads to a vector geometrically multiplied by a 3-vector, which gives a 4-vector as well as a bivector contribution in more than 3 dimensions. Is this correct, or did I miss something?

For the tensor of inertia at an edge there is a formula in GA4Ph which is equivalent to Steiner’s theorem: the tensor at a position {\bf a} relative to the center of mass is given by

T_{\bf a}(B) = T_{\rm com}(B) + M {\bf a} \wedge ({\bf a}\cdot B)

where B is a bivector.
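To see this in familiar terms: in 3D, dualizing the bivector B to an angular-velocity vector {\bf u} turns the correction term M {\bf a} \wedge ({\bf a} \cdot B) into M({\bf a}^2 {\bf u} - ({\bf a}\cdot{\bf u})\,{\bf a}), i.e. the usual parallel-axis operator M(|{\bf a}|^2 \mathbb{1} - {\bf a}{\bf a}^T). Here is a rough numpy check of that on a cuboid sampled on a grid (all names are just made up for the example):

```python
import numpy as np

def inertia_matrix(points, masses, origin):
    """Matrix form of  sum_k m_k x ^ (x . B)  with x measured from `origin`:
    I = sum_k m_k (|x|^2 1 - x x^T)."""
    I = np.zeros((3, 3))
    for p, m in zip(points, masses):
        x = p - origin
        I += m * (x @ x * np.eye(3) - np.outer(x, x))
    return I

# uniform cuboid with sides a, b, c and total mass M, on a midpoint grid
a, b, c, M, n = 1.0, 2.0, 3.0, 5.0, 24
g = lambda s: ((np.arange(n) + 0.5) / n - 0.5) * s
X, Y, Z = np.meshgrid(g(a), g(b), g(c), indexing="ij")
pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
m = np.full(len(pts), M / len(pts))

I_com = inertia_matrix(pts, m, np.zeros(3))  # ~ M/12 diag(b^2+c^2, a^2+c^2, a^2+b^2)

# Steiner shift from the center of mass to the midpoint of an edge
d = np.array([a / 2, b / 2, 0.0])
I_edge = I_com + M * (d @ d * np.eye(3) - np.outer(d, d))
print(np.allclose(I_edge, inertia_matrix(pts, m, d)))   # True
```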

@wfmcgee could you please elaborate on the “invariant decomposition”? What does it mean? Is it the decomposition of a geometric product of bivectors into different parts
AB = A\cdot B + [A,B] + A\wedge B?

Thanks in advance! :smiley:

Yes, that’s the formula, but you can use the cross product instead; for example, the tensor of inertia of a uniform rod about its end is {\bf L}\times({\bf U}\times{\bf L})/3.
So it is a linear function that describes the properties of the rod: for any angular velocity vector {\bf U}, it gives the angular momentum vector.
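A quick numeric sanity check of the rod formula (with the total mass m reinstated, which I assume was left implicit) against the direct integral \int \rho\, {\bf x}\times({\bf U}\times{\bf x})\, ds over the rod:

```python
import numpy as np

m = 2.0                                   # total mass of the rod
L = np.array([3.0, 0.0, 0.0])             # rod vector from the fixed end to the tip
U = np.array([0.3, -0.5, 0.8])            # arbitrary angular velocity vector

# closed form quoted above, with the mass made explicit
T_formula = m * np.cross(L, np.cross(U, L)) / 3

# direct integral over the rod, midpoint rule with N samples
N = 10000
xs = ((np.arange(N) + 0.5) / N)[:, None] * L
T_direct = (m / N) * np.cross(xs, np.cross(U, xs)).sum(axis=0)

print(np.allclose(T_formula, T_direct, rtol=1e-4))   # True
```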

One can easily come up with the principal vectors (eigenvectors) of such tensors using symmetry arguments, but since GA is advertised as the algebra of such geometric objects, I’d think there would be some intrinsic way of doing a formal computation, without having to resort to matrices…

Yes, that last formula is very useful, but I don’t know if there is a way of computing the eigenvectors of any such tensor in GA.

@Leo_Kovacic1 thanks for your fast response. I tried to calculate your formula above by decomposing all the quantities into basis elements (yeah, I know this is bad practice, but I am still learning :smiley:). I started from the formula (GA4Ph):

I(B) = \int d^3 x\,\rho {\bf x} \wedge ({\bf x} \cdot B)

where I decomposed {\bf x} = x_i {\bf e}_i and B = \frac{1}{2} B_k \epsilon_{k\ell m} e_{\ell m} (this is maybe where your cross product comes from), and I used the summation convention. This leads in the end to the formula

I(B) = \int d^3 x\, \rho x_i x_j B_k \epsilon_{kjm} e_{im}

Here comes my point: if we integrate over a cuboid (i.e. \int d^3 x = \int_{-a/2}^{a/2} dx \int_{-b/2}^{b/2} dy \int_{-c/2}^{c/2} dz), then for constant \rho the integral

\int d^3 x\,\rho x_i x_j = D_{ij}

gives a diagonal matrix D_{ij}, but one that cannot be factorized into a product of vectors. So I cannot rearrange the whole expression in terms of B without the decomposition. Or did I miss something?
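(For the record, evaluating the integral explicitly: parity kills the off-diagonal terms, and

D_{ij} = \rho \int d^3x\, x_i x_j = \frac{M}{12}\,{\rm diag}(a^2, b^2, c^2)_{ij}, \qquad M = \rho\, abc,

so D is indeed diagonal but, as said, not a product of two vectors.)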

On the other hand, I could reproduce a formula like yours above when I assume that \rho = m \sum_a \delta({\bf x} - {\bf x}_a). Then the integral above gives

I(B) = m \sum_a {\bf x}_a \wedge ({\bf x}_a \cdot B)

By the way, changing the order of \cdot and \wedge in the product leads to the other summand in the formula. This is a nice identity:

{\bf a}{\bf a}B = a^2 B = \underbrace{({\bf a}\cdot({\bf a}\cdot B))}_{=0} + ({\bf a}\cdot({\bf a}\wedge B)) + ({\bf a}\wedge ({\bf a}\cdot B)) + \underbrace{({\bf a}\wedge({\bf a}\wedge B))}_{=0} = ({\bf a}\cdot({\bf a}\wedge B)) + ({\bf a}\wedge ({\bf a}\cdot B))
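In 3D this identity is just the bac-cab rule in disguise: writing B = I{\bf b} with I the unit pseudoscalar, one has {\bf a}\cdot B = {\bf b}\times{\bf a} and {\bf a}\wedge B = I({\bf a}\cdot{\bf b}), so the two surviving terms dualize to

a^2 {\bf b} = ({\bf a}\cdot{\bf b})\,{\bf a} + {\bf a}\times({\bf b}\times{\bf a})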

Back on topic: in the end, for a continuous cube one gets something proportional to D_{ij} \sim \delta_{ij}, such that I(B) = c B. Reading off the eigenvalues is then trivial: they are degenerate, so B is arbitrary. For the more general situation

m \sum_a {\bf x}_a \wedge ({\bf x}_a \cdot B) = \lambda B

I have no solution. But maybe we could adapt the approach for linear equations M {\bf x} = {\bf b} by rewriting the columns of M as vectors {\bf v}_i, so that the system of equations can be written as

\sum_i x_i {\bf v}_i = {\bf b}

Now one can use multiple wedge products to isolate the x_i and therefore solve the system directly (given that \bigwedge_i {\bf v}_i \ne 0). For the eigenvalue problem, the only solution I found was to write

\sum_i x_i ({\bf v}_i - \lambda {\bf e}_i) = 0

where the solution goes along the lines above and leads to the values of \lambda (here we want \bigwedge_i ({\bf v}_i - \lambda {\bf e}_i) = 0) and, via the x_i, to the eigenvectors. I don’t know whether we could use this for mappings between bivectors, but decomposing them into basis elements only brings us back to the usual matrix formalism.
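Numerically this is just Cramer’s rule: the n-fold wedge of n vectors in n dimensions is the determinant of the matrix with those vectors as columns (times the pseudoscalar). A rough numpy sketch of both steps (all names here are just illustrative):

```python
import numpy as np

def wedge_n(vectors):
    """v_1 ^ ... ^ v_n in n dimensions = det([v_1 ... v_n]) times the
    pseudoscalar; return the scalar coefficient."""
    return np.linalg.det(np.column_stack(vectors))

def solve_by_wedge(vs, b):
    """Solve sum_i x_i v_i = b by wedging out the other v_j,
    i.e. Cramer's rule in GA clothing."""
    denom = wedge_n(vs)                    # must be nonzero
    return np.array([wedge_n(vs[:i] + [b] + vs[i + 1:]) / denom
                     for i in range(len(vs))])

vs = [np.array([2.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0]),
      np.array([1.0, 1.0, 0.0])]
b = np.array([3.0, 2.0, 2.0])
x = solve_by_wedge(vs, b)
print(np.allclose(sum(xi * vi for xi, vi in zip(x, vs)), b))    # True

# eigenvalues: wedge_n(v_i - lam e_i) = 0 is exactly det(M - lam 1) = 0
M = np.column_stack(vs)
for lam in np.linalg.eigvals(M):
    shifted = [M[:, i] - lam * np.eye(3)[:, i] for i in range(3)]
    print(abs(wedge_n(shifted)))           # ~ 0 for every eigenvalue
```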

Sorry for the delay. The invariant decomposition of a bivector in a space of dimension n is an expression of the bivector as a sum of at most n/2 commuting and hopefully simple (i.e. squaring to a scalar) bivector terms. This means that the exponential of the bivector may be written as a product of n/2 commuting factors. Martin Roelfs is giving a talk on this at the GAME2023 conference this very week, and I hope to hear about the latest and greatest on this subject then. My suspicion is that the simplicity requirement may be replaced by the cube of the term being proportional to the term itself (i.e. B^3 proportional to B), or by its squaring to zero. This leads to rotors involving up to quadratic terms in the bivector. Prof. Leo Dorst studied this a few years ago.

You may wonder why, if B^3 = B, B^2 is not a scalar. The reason is that B may have an idempotent factor which, if present, ensures that B has no inverse.
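For instance (if I compute correctly), in R(2,2) the bivector B = e_1f_1 + e_2f_2 satisfies

B^2 = 2(1 + e_1f_1e_2f_2), \qquad B^3 = 4B,

and \tfrac{1}{2}(1 + e_1f_1e_2f_2) is idempotent; since (1 + e_1f_1e_2f_2)(1 - e_1f_1e_2f_2) = 0, B^2 is a zero divisor and B has no inverse.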

OK, but how exactly does that pertain to the eigenvalue problem, particularly in 3D, where all bivectors should be simple anyway…

So if there is some linear function of vectors or bivectors, how exactly does one calculate its eigenvalues?