@Leo_Kovacic1 thanks for your fast response. I tried to evaluate your formula above by decomposing all the quantities into basis elements (yeah, I know this is bad practice, but I am still learning). I started from the formula (GA4Ph):

I(B) = \int d^3 x\,\rho {\bf x} \wedge ({\bf x} \cdot B)

where I decomposed {\bf x} = x_i {\bf e}_i and B = \frac{1}{2} B_k \epsilon_{k\ell m} e_{\ell m} (with e_{\ell m} = {\bf e}_\ell \wedge {\bf e}_m; this is maybe where your cross product comes from), using the summation convention. This leads in the end to the formula

I(B) = \int d^3 x\, \rho x_i x_j B_k \epsilon_{kjm} e_{im}

Here comes my point. If we integrate over a cuboid (i.e. \int d^3 x = \int_{-a/2}^{a/2} dx \int_{-b/2}^{b/2} dy \int_{-c/2}^{c/2} dz) then for constant \rho the integral

\int d^3 x\,\rho x_i x_j = D_{ij}

gives a diagonal matrix D_{ij}, but it cannot be factorized into a product of vectors. So I cannot rearrange everything in terms of B without the decomposition. Or did I miss something?
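To convince myself, here is a quick numerical sketch (my own check, with arbitrary values for \rho, a, b, c): it confirms that D_{ij} is diagonal for a cuboid, with D_{ii} = \rho V L_i^2/12, and that D_{ij} \sim \delta_{ij} only holds for a cube a = b = c.

```python
import numpy as np

# D_ij = rho * ∫ x_i x_j d^3x over the cuboid
# [-a/2, a/2] x [-b/2, b/2] x [-c/2, c/2].
# Off-diagonal terms vanish by symmetry; the closed form of the
# diagonal is D_ii = rho * V * L_i^2 / 12 with L = (a, b, c), V = a*b*c.

rho = 2.0
a, b, c = 1.0, 2.0, 3.0          # arbitrary edge lengths for the check
L = np.array([a, b, c])
V = a * b * c

D_analytic = rho * V * np.diag(L**2) / 12.0

# Monte-Carlo estimate of the same integral
rng = np.random.default_rng(0)
pts = (rng.random((200_000, 3)) - 0.5) * L    # uniform points in the cuboid
D_mc = rho * V * (pts.T @ pts) / len(pts)

assert np.allclose(D_mc, D_analytic, atol=0.05)

# only for a cube (a = b = c) is D proportional to the identity:
print(np.allclose(D_analytic, D_analytic[0, 0] * np.eye(3)))  # False for a != b != c
```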

On the other hand, I could reproduce a formula like yours above when I assume that \rho = m \sum_a \delta({\bf x} - {\bf x}_a). Then the integral above gives

I(B) = m \sum_a {\bf x}_a \wedge ({\bf x}_a \cdot B)

Btw., changing the order of \cdot and \wedge in the product gives the other nonzero summand of a nice identity:

{\bf a}{\bf a}B = a^2 B
= \underbrace{({\bf a}\cdot({\bf a}\cdot B))}_{=0} + ({\bf a}\cdot({\bf a}\wedge B)) + ({\bf a}\wedge ({\bf a}\cdot B)) + \underbrace{({\bf a}\wedge({\bf a}\wedge B))}_{=0} = ({\bf a}\cdot({\bf a}\wedge B)) + ({\bf a}\wedge ({\bf a}\cdot B))
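The identity can be verified numerically via the matrix representation Cl(3,0) ≅ 2×2 complex matrices, {\bf e}_i ↦ Pauli matrix \sigma_i (my own verification sketch, with arbitrary coefficients for {\bf a} and B):

```python
import numpy as np

# Check  a a B = a^2 B = a·(a∧B) + a∧(a·B)  in the Pauli representation,
# where the geometric product is the matrix product.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

a = 0.3 * s1 - 1.2 * s2 + 0.7 * s3              # a generic vector a
B = 0.4 * 0.5 * (s1 @ s2 - s2 @ s1) + \
    1.1 * 0.5 * (s2 @ s3 - s3 @ s2)             # a generic bivector B

a_sq = (0.3**2 + (-1.2)**2 + 0.7**2) * np.eye(2)

# vector·bivector and vector∧bivector as (anti)commutators:
v = 0.5 * (a @ B - B @ a)                       # a·B  (a vector)
W = 0.5 * (a @ B + B @ a)                       # a∧B  (a trivector)

term1 = 0.5 * (a @ v + v @ a)                   # a·(a·B), vanishes
term2 = a @ W                                   # a·(a∧B); a∧W = 0 in 3D
term3 = 0.5 * (a @ v - v @ a)                   # a∧(a·B)

assert np.allclose(a @ a, a_sq)                 # a a = a^2
assert np.allclose(term1, 0)                    # the underbraced zero term
assert np.allclose(term2 + term3, a_sq @ B)     # the identity holds
print("identity verified")
```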

Back on topic: in the end, for a continuous cube one gets something proportional to D_{ij} \sim \delta_{ij}, such that

I(B) = c B, for which reading off the eigenvalues is trivial: they are fully degenerate, so any bivector B is an eigenbivector. For the more general situation

m \sum_a {\bf x}_a \wedge ({\bf x}_a \cdot B) = \lambda B

I have no solution. But maybe we could adapt the approach for linear equations M {\bf x} = {\bf b}, rewriting the columns of M as vectors {\bf v}_i so that the system of equations can be written as

\sum_i x_i {\bf v}_i = {\bf b}

Now one can use multiple wedge products to isolate the x_i and thus solve the system directly (provided \bigwedge_i {\bf v}_i \ne 0): wedging with all {\bf v}_j, j \ne i, kills every term except the i-th, which is essentially Cramer's rule. For the eigenvalue problem the only solution I found was to write
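Identifying the wedge product of n vectors with a determinant, the isolation of the x_i reads x_i = det({\bf v}_1, ..., {\bf b}, ..., {\bf v}_n) / det({\bf v}_1, ..., {\bf v}_n), i.e. Cramer's rule. A small sketch with a toy matrix of my choosing:

```python
import numpy as np

# Solve M x = b by the wedge-product / Cramer route:
# x_i = det(v_1, ..., b, ..., v_n) / det(v_1, ..., v_n),
# with b replacing the i-th column v_i of M.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

denom = np.linalg.det(M)          # ⋀_i v_i, must be nonzero
x = np.empty(3)
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = b                  # replace the i-th column by b
    x[i] = np.linalg.det(Mi) / denom

assert np.allclose(M @ x, b)      # the wedge solution solves the system
print(x)
```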

\sum_i x_i ({\bf v}_i - \lambda {\bf e}_i) = 0

where the solution goes along the same lines and leads to the values of \lambda (here we want \bigwedge_i ({\bf v}_i - \lambda {\bf e}_i) = 0) and, via the x_i, to the eigenvectors. I don't know whether we could use this for mappings between bivectors. But decomposing them into basis elements only brings us back to the usual matrix formalism.
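With determinants standing in for the n-fold wedge, the condition \bigwedge_i ({\bf v}_i - \lambda {\bf e}_i) = 0 is just det(M - \lambda I) = 0, the characteristic equation. A sketch of that route (my own toy symmetric matrix, so the eigenvalues come out real):

```python
import numpy as np

# ⋀_i (v_i - λ e_i) = 0  ⇔  det(M - λI) = 0 (characteristic equation)
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

coeffs = np.poly(M)                 # characteristic polynomial of M
lams = np.roots(coeffs)             # its roots = the eigenvalues

for lam in lams:
    # the x_i follow from the homogeneous system (M - λI) x = 0;
    # take a null-space vector via SVD
    _, _, Vt = np.linalg.svd(M - lam.real * np.eye(3))
    x = Vt[-1]
    assert np.allclose(M @ x, lam.real * x, atol=1e-8)

print(np.sort(lams.real))
```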