Is there any chance of increasing the limits on the code generator? My first priority is R700 and R070, but I would also be interested in dimension 8.
Grassmann.jl easily supports working with all those
julia> using Grassmann
julia> Λ(8).v1, Λ(8).v2
(v₁, v₂)
julia> Λ(S"--------").v1, Λ(S"--------").v2
(v₁, v₂)
8 dimensions are no problem at all for Grassmann.jl
If you are into Rust, you might try out grumyantsev/tclifford on GitHub, a geometric algebra crate for Rust.
It's very unfinished as of now though. I'll make a proper announcement about this library soon.
declare_algebra!(Cl8, [+,+,+,+,+,+,+,+]);
type MV = Multivector<f64, Cl8>;
let e: [MV; 8] = MV::basis();
let a: MV = (e[0].fft() * e[1].fft()).ifft(); // FFT for fast multiplication in large dimensions!
Thanks for the offer. You may not believe this, but I use Pascal, and the cpp files are an easy modification (replace = with :=). So I prefer cpp.
Is your library using matrix representations to compute the geometric product?
Grassmann.jl doesn't need any matrix representations for geometric products. It can even do geometric products in 16-dimensional algebras without crashing, and it doesn't take long to compile.
Oh, you wanna have a competition?
How about multiplication of dense 20-dimensional multivectors? Dense means that every basis blade has a non-zero coefficient in front of it.
use std::hint::black_box;
use std::time;

fn challenge() {
    declare_algebra!(Cl20, [+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+,+]);
    let mut max_duration = time::Duration::new(0, 0);
    let mut total_duration = time::Duration::new(0, 0);
    for _ in 0..100 {
        // generate random multivectors
        let a = Multivector::<f64, Cl20>::from_indexed_iter(
            Cl20::index_iter().map(|idx| (idx, rand::random())),
        )
        .unwrap();
        let b = Multivector::<f64, Cl20>::from_indexed_iter(
            Cl20::index_iter().map(|idx| (idx, rand::random())),
        )
        .unwrap();
        // measure the time
        let start = time::Instant::now();
        let _ = black_box(a.fft() * b.fft()).ifft::<f64>();
        let duration = start.elapsed();
        max_duration = max_duration.max(duration);
        total_duration += duration;
    }
    println!(
        "duration avg: {:?}; max: {:?}",
        total_duration / 100,
        max_duration
    );
}
Result:
duration avg: 472.204881ms; max: 626.331815ms
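For scale, the naive dense product loops over all pairs of the 2^n basis blades, so it costs O(4^n) multiplications, which is exactly what matrix or FFT representations avoid at n = 20. Here is a minimal hand-rolled sketch of that naive approach (Euclidean signature assumed, blades encoded as bitmasks; this is an illustration, not tclifford's actual implementation):

```rust
// Sign of the product of two basis blades given as bitmasks:
// the blade of the result is a XOR b, and the sign counts the
// transpositions needed to sort the concatenated basis vectors.
fn blade_sign(mut a: u32, b: u32) -> f64 {
    let mut swaps = 0;
    a >>= 1;
    while a != 0 {
        swaps += (a & b).count_ones();
        a >>= 1;
    }
    if swaps % 2 == 0 { 1.0 } else { -1.0 }
}

// `x` and `y` each hold 2^n coefficients, indexed by blade bitmask.
fn geometric_product(x: &[f64], y: &[f64]) -> Vec<f64> {
    let mut out = vec![0.0; x.len()];
    for i in 0..x.len() {
        for j in 0..y.len() {
            out[i ^ j] += blade_sign(i as u32, j as u32) * x[i] * y[j];
        }
    }
    out
}
```

For example, in Cl(2,0) with basis order 1, e1, e2, e12, calling `geometric_product(&[0.0, 1.0, 0.0, 0.0], &[0.0, 0.0, 1.0, 0.0])` gives e1 e2 = e12, and swapping the arguments gives -e12.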
I couldn't care less about algebras higher than 5 dimensions. I only implemented the higher dimensions for fun and never needed them for any real practical calculation, so I haven't maintained that functionality since I originally wrote it.
Good for you that you can do 20 dimensions so efficiently. I'm certainly not competing on higher-dimensional algebras; it's only for fun, and I have no need to maintain that feature.
For the algebras I am working with, there is no need to rely on matrix representations to compute the geometric product. I can do the 5-dimensional dense geometric product in less than 200 nanoseconds and the 3-dimensional one in less than 10 nanoseconds. How does that compare to yours?
Do you want to do a performance comparison of the 5 dimensional geometric product?
julia> using Grassmann
julia> basis"5";
julia> a = rand(Multivector{V});
julia> b = rand(Multivector{V});
julia> @btime $a*$b
190.108 ns (0 allocations: 0 bytes)
For 3 dimensions this test takes only 9 nanoseconds
julia> basis"3";
julia> a = rand(Multivector{V});
julia> b = rand(Multivector{V});
julia> @btime $a*$b
9.076 ns (0 allocations: 0 bytes)
Now let's see whether your 5-dimensional geometric product for dense multivectors is less than 200 nanoseconds, or your 3-dimensional product less than 10 nanoseconds.
And if it's indeed faster than mine, good for you; I'm really not competing about it.
I don't need matrix representations either; they are just asymptotically better at high dimensions.
But 200 ns for 5D is actually impressive. I got 8 µs. I do have a few ideas for low-dimension optimizations, though, so I guess I'd better start implementing them.
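The matrix-representation idea is easiest to see in the smallest case: Cl(2,0) is isomorphic to the algebra of real 2×2 matrices, so one geometric product becomes one matrix multiplication. A hand-rolled sketch (the basis images below are one standard choice, not necessarily what tclifford uses internally):

```rust
type Mat2 = [[f64; 2]; 2];

// Map a Cl(2,0) multivector [1, e1, e2, e12] to a 2x2 matrix using
// e1 -> [[1,0],[0,-1]], e2 -> [[0,1],[1,0]], e12 -> [[0,1],[-1,0]].
fn to_matrix(m: [f64; 4]) -> Mat2 {
    let [s, e1, e2, e12] = m;
    [[s + e1, e2 + e12], [e2 - e12, s - e1]]
}

// Invert the map: the four matrix entries determine the four coefficients.
fn from_matrix(m: Mat2) -> [f64; 4] {
    [
        (m[0][0] + m[1][1]) / 2.0,
        (m[0][0] - m[1][1]) / 2.0,
        (m[0][1] + m[1][0]) / 2.0,
        (m[0][1] - m[1][0]) / 2.0,
    ]
}

fn matmul(a: Mat2, b: Mat2) -> Mat2 {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

// Geometric product via the matrix representation.
fn gp(a: [f64; 4], b: [f64; 4]) -> [f64; 4] {
    from_matrix(matmul(to_matrix(a), to_matrix(b)))
}
```

For n basis vectors the representing matrices have side roughly 2^(n/2) (over real, complex, or quaternionic entries depending on the signature), so the product costs one matrix multiply instead of 4^n blade-pair terms, which is where the asymptotic advantage comes from.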
Also, here is my dimension-8 benchmark: less than 5 milliseconds for a full multivector geometric product, and it doesn't involve any matrices at all.
julia> using Grassmann
julia> basis"8";
julia> a = rand(Multivector{V});
julia> b = rand(Multivector{V});
julia> @btime $a*$b
4.895 ms (242976 allocations: 8.92 MiB)
At dimension 8, I already have the upper hand:
duration avg: 61.784µs; max: 136.083µs
Perhaps someday I will update Grassmann.jl with the FFT approach for high dimensions, but it ranks extremely low on my priorities and is completely unnecessary for what I'm working on currently. Thanks for sharing.
I realized that I can generate the code I need for mul from the code at Clifford Algebra Explorer, plus add, sub, rev, involute, conjugate, transpose, and smul; I'm not sure I need all the rest.
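For a small algebra, the generated code is just the fully expanded bilinear form with a fixed sign pattern, no loops or tables. As an illustration, hand-derived for Cl(2,0) rather than R700, and in Rust rather than the explorer's C++ output, mul, rev, and smul would come out like this:

```rust
// Fully unrolled geometric product for Cl(2,0),
// basis order: 1, e1, e2, e12.
fn mul(a: [f64; 4], b: [f64; 4]) -> [f64; 4] {
    [
        a[0] * b[0] + a[1] * b[1] + a[2] * b[2] - a[3] * b[3],
        a[0] * b[1] + a[1] * b[0] - a[2] * b[3] + a[3] * b[2],
        a[0] * b[2] + a[1] * b[3] + a[2] * b[0] - a[3] * b[1],
        a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + a[3] * b[0],
    ]
}

// Reverse flips the sign of the grade-2 part in Cl(2,0).
fn rev(a: [f64; 4]) -> [f64; 4] {
    [a[0], a[1], a[2], -a[3]]
}

// Scalar multiplication is just a broadcast.
fn smul(s: f64, a: [f64; 4]) -> [f64; 4] {
    [s * a[0], s * a[1], s * a[2], s * a[3]]
}
```

Translating such output to Pascal is indeed mechanical: each line is one assignment, so only the `=` versus `:=` syntax differs.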