Code Generator Tools

Is there any chance of increasing the limits on the code generator? My first priority is R700 and R070, but I would also be interested in dimension 8.

Grassmann.jl easily supports working with all those

julia> using Grassmann

julia> Λ(8).v1, Λ(8).v2
(v₁, v₂)

julia> Λ(S"--------").v1, Λ(S"--------").v2
(v₁, v₂)

8 dimensions are no problem at all for Grassmann.jl.
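A full multivector in 8 dimensions has 2^8 = 256 coefficients, and products of basis blades work the same way as in low dimensions. For example (a quick sketch using the same Λ constructor as above; the exact printed form may differ between versions):

julia> Λ(8).v1 * Λ(8).v2
v₁₂

julia> Λ(8).v2 * Λ(8).v1
-1v₁₂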

If you are into Rust, you might try out tclifford (https://github.com/grumyantsev/tclifford), a geometric algebra crate for Rust.
It’s very unfinished as of now though. I’ll make a proper announcement about this library soon.

declare_algebra!(Cl8, [+,+,+,+,+,+,+,+]);
type MV = Multivector<f64, Cl8>;
let e: [MV; 8] = MV::basis();
let a: MV = (e[0].fft() * e[1].fft()).ifft(); // FFT for fast multiplication in large dimensions!

Thanks for the offer. You may not believe this, but I use Pascal, and this is an easy modification of the cpp files (replace = with :=), so I prefer the cpp output.

Is your library using matrix representations to compute the geometric product?

Grassmann.jl doesn’t need any matrix representations for geometric products. It can even do geometric products in 16-dimensional algebras without crashing, and it doesn’t take long to compile.
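As a quick illustration of the 16-dimensional case (a sketch only, using the same Λ constructor shown earlier; this is not a benchmark):

julia> using Grassmann

julia> Λ(16).v1 * Λ(16).v2
v₁₂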

Oh, you wanna have a competition? :slight_smile:
How about multiplication of dense 20-dimensional multivectors? Dense means that every basis blade has a non-zero coefficient in front of it.

use std::hint::black_box;
use std::time;
// `declare_algebra!` and `Multivector` are assumed to be in scope from the tclifford crate,
// and `rand::random()` requires the `rand` crate as a dependency.

fn challenge() {
    declare_algebra!(Cl20, [+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+]);

    let mut max_duration = time::Duration::new(0, 0);
    let mut total_duration = time::Duration::new(0, 0);

    for _ in 0..100 {
        // generate random multivectors
        let a = Multivector::<f64, Cl20>::from_indexed_iter(
            Cl20::index_iter().map(|idx| (idx, rand::random())),
        )
        .unwrap();
        let b = Multivector::<f64, Cl20>::from_indexed_iter(
            Cl20::index_iter().map(|idx| (idx, rand::random())),
        )
        .unwrap();

        // measure the time
        let start = time::Instant::now();
        let _ = black_box(a.fft() * b.fft()).ifft::<f64>();
        let duration = start.elapsed();
        max_duration = max_duration.max(duration);
        total_duration += duration;
    }
    println!(
        "duration avg: {:?}; max: {:?}",
        total_duration / 100,
        max_duration
    );
}

Result:

duration avg: 472.204881ms; max: 626.331815ms

I couldn’t care less about algebras higher than 5 dimensions. I only have implementations for higher dimensions for fun and never needed them for any real practical calculations, so I haven’t maintained that functionality since I originally implemented it.

Good for you that you can do 20 dimensions so efficiently. I’m certainly not competing on higher-dimensional algebras; for me it’s only for fun, and I don’t have a need to maintain that feature.

For the algebras I am working with, there is no need to rely on matrix representations to compute the geometric product. I can do the 5-dimensional dense geometric product in less than 200 nanoseconds, and the 3-dimensional one in less than 10 nanoseconds. How does that compare to yours?

Do you want to do a performance comparison of the 5 dimensional geometric product?

julia> using Grassmann

julia> using BenchmarkTools  # provides @btime

julia> basis"5";

julia> a = rand(Multivector{V});

julia> b = rand(Multivector{V});

julia> @btime $a*$b
  190.108 ns (0 allocations: 0 bytes)

For 3 dimensions this test only takes 9 nanoseconds

julia> basis"3";

julia> a = rand(Multivector{V});

julia> b = rand(Multivector{V});

julia> @btime $a*$b
  9.076 ns (0 allocations: 0 bytes)

Now let’s see if your 5-dimensional geometric product for dense multivectors is less than 200 nanoseconds, or your 3-dimensional product is less than 10 nanoseconds.

And if yours is indeed faster than mine, good for you; I’m really not competing over it.

I don’t need matrix representations either; they are just asymptotically better at high dimensions.
But 200 ns for 5d is actually impressive. I got 8 µs. I do have a few ideas for low-dimension optimizations, though; guess I’d better start implementing them :slight_smile:
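For a rough sense of why matrix representations win asymptotically (a back-of-the-envelope estimate, not a measured result): a dense multivector in dimension d has 2^d coefficients, so the direct blade-by-blade product costs on the order of

2^d * 2^d = 4^d multiply-adds,

while a faithful matrix representation is roughly 2^(d/2) × 2^(d/2) (over the reals, complexes, or quaternions depending on signature), so the naive matrix product costs about

(2^(d/2))^3 = 2^(1.5 d),

which grows more slowly than 4^d; the transforms to and from the matrix form are comparatively cheap.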

Also, here is my dimension-8 benchmark: less than 5 milliseconds for a full multivector geometric product, and it doesn’t involve any matrices at all.

julia> using Grassmann

julia> using BenchmarkTools  # provides @btime

julia> basis"8";

julia> a = rand(Multivector{V});

julia> b = rand(Multivector{V});

julia> @btime $a*$b
  4.895 ms (242976 allocations: 8.92 MiB)

At dimension 8, I already have the upper hand:

duration avg: 61.784µs; max: 136.083µs

Perhaps someday I will update Grassmann.jl with the FFT approach for high dimensions, but it ranks extremely low on my priorities and is completely unnecessary for the things I’m working on currently. Thanks for sharing.


I realized that I can generate the code I need for mul from the code at the Clifford Algebra Explorer, along with add, sub, rev, involute, conjugate, transpose, and smul; I’m not sure I need all the rest.
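For reference, rev, involute, and conjugate are just grade-dependent sign flips on the coefficients, so they are much simpler to generate than mul. A minimal Julia sketch of the standard sign conventions (the helper names are mine, not output of the explorer):

# Sign applied to the grade-k part of a multivector by each involution:
#   reverse:   (-1)^(k(k-1)/2)
#   involute:  (-1)^k
#   conjugate: their composition, (-1)^(k(k+1)/2)
reverse_sign(k)   = iseven((k * (k - 1)) ÷ 2) ? 1 : -1
involute_sign(k)  = iseven(k) ? 1 : -1
conjugate_sign(k) = reverse_sign(k) * involute_sign(k)

# Grades 0..4 give the familiar patterns: +,+,-,-,+ / +,-,+,-,+ / +,-,-,+,+
[(k, reverse_sign(k), involute_sign(k), conjugate_sign(k)) for k in 0:4]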