SIMD and Optimath #292

Open
Mec-iS opened this issue Jan 27, 2025 · 3 comments
Labels
explorative not an actual issue. Just a thing to look into

Comments

@Mec-iS (Collaborator) commented Jan 27, 2025:

Consider improvements based on Optimath https://docs.rs/optimath/latest/optimath/

@Mec-iS Mec-iS added the explorative not an actual issue. Just a thing to look into label Jan 27, 2025
@Mec-iS (Collaborator, Author) commented Feb 28, 2025:

@mxfactorial:

optimath and SIMD may incrementally boost your current linalg module's performance,

but it's still O(n^3) code.

if you're willing to venture beyond linear algebra and scalar calculus, https://crates.io/crates/geonum achieves O(1) operations regardless of dimension

it consumes minimal memory since multivectors are represented with just 2 components (length and angle) instead of the 2^n components required by traditional geometric algebra

geonum's machine_learning_test.rs suite demonstrates:

  • perceptron classification in 50,000D space with O(1) vs O(n) complexity
  • linear regression without expensive gram matrix computation
  • neural networks with O(1) forward/backward passes vs O(n²) matrix multiplications
  • clustering with O(1) distance calculations vs O(n) euclidean distances
  • dimensionality reduction without O(n³) eigendecomposition
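
The O(1) claims above rest on collapsing a vector into a single (length, angle) pair, so a pairwise product becomes scalar trigonometry instead of an n-term sum. A minimal sketch of that idea, using a hypothetical `Polar` struct (illustrative only, not the actual geonum API):

```rust
// Hypothetical sketch of the length/angle representation;
// the real geonum types and method names may differ.
#[derive(Clone, Copy, Debug)]
struct Polar {
    length: f64,
    angle: f64, // radians
}

impl Polar {
    /// Dot product as |a||b|·cos(θb − θa): one multiplication and one
    /// cosine, independent of whatever ambient dimension the angles encode.
    fn dot(self, other: Polar) -> f64 {
        self.length * other.length * (other.angle - self.angle).cos()
    }
}

fn main() {
    let a = Polar { length: 1.0, angle: 0.0 };
    let b = Polar { length: 1.0, angle: std::f64::consts::FRAC_PI_2 };
    // Vectors a quarter-turn apart are orthogonal: dot product ≈ 0.
    println!("a·b = {}", a.dot(b));
}
```

This is the sense in which orthogonality is "encoded directly with angles": a 90° angular separation makes the cosine term vanish without any coordinate-by-coordinate computation.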

at just 16 dimensions, geonum is 4300× faster than a tensor implementation, and it maintains consistent ~78 ns performance even in million-dimensional spaces

so while optimath/simd provides instruction-level parallelism for existing operations, geonum swaps them out with a more scalable design

encoding orthogonality relationships directly with angles, instead of computing them repeatedly, eliminates the computational bottleneck entirely (orders of magnitude beyond what's possible with traditional optimizations)

try it out: https://github.com/mxfactorial/geonum/tree/develop?tab=readme-ov-file#learn-with-ai

@Mec-iS (Collaborator, Author) commented May 1, 2025:

Thanks for this suggestion.

Geometric algebra is indeed interesting, and it may be worthwhile to consider a feature in smartcore for this. It would certainly be an option for users looking for something different from the current linear algebra options.

Something like:

1. Add a feature flag:

   ```toml
   smartcore = { ..., features = ["geoalgebra"] }
   ```

2. Implement an adapter layer (field names here are illustrative; mapping the polar (length, angle) pair to Cartesian coordinates is one possible conversion):

   ```rust
   impl From<Geonum> for Array1<f64> {
       fn from(g: Geonum) -> Self {
           // Convert the (length, angle) representation to coordinates,
           // e.g. a 2D Cartesian projection:
           array![g.length * g.angle.cos(), g.length * g.angle.sin()]
       }
   }
   ```

3. Algorithm mods:
   • `LinearRegression` → `GeometricRegression`
   • Kernel methods using wedge products
   • Angle-based clustering
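
For the angle-based clustering item, the assignment step could compare angular separation directly instead of an n-dimensional Euclidean distance. A hedged sketch under that assumption, using a hypothetical `Polar` point type (not smartcore's or geonum's actual API):

```rust
// Hypothetical polar point type for illustration only.
#[derive(Clone, Copy)]
struct Polar {
    length: f64,
    angle: f64, // radians
}

/// Angular distance between two points, O(1) per pair:
/// the absolute angle difference, wrapped into [0, π].
fn angular_distance(a: Polar, b: Polar) -> f64 {
    let two_pi = 2.0 * std::f64::consts::PI;
    let d = (a.angle - b.angle).abs() % two_pi;
    d.min(two_pi - d)
}

/// Assign each point to the index of the nearest centroid by angle alone.
fn assign(points: &[Polar], centroids: &[Polar]) -> Vec<usize> {
    points
        .iter()
        .map(|p| {
            centroids
                .iter()
                .enumerate()
                .min_by(|(_, a), (_, b)| {
                    angular_distance(*p, **a)
                        .partial_cmp(&angular_distance(*p, **b))
                        .unwrap()
                })
                .map(|(i, _)| i)
                .unwrap()
        })
        .collect()
}
```

Each assignment is a constant-time angle comparison per centroid, which is where the claimed O(1)-per-distance advantage over Euclidean distance would show up.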

I am not an expert on this, but if you want to consider opening a PR, I would be glad to learn, help, and follow along.
