#machine-learning #simd #vectorized #function #mathml #ml-model

rten-vecmath

SIMD vectorized implementations of various math functions used in ML models

9 releases (breaking)

new 0.10.0 May 25, 2024
0.8.0 Apr 29, 2024
0.6.0 Mar 31, 2024
0.1.0 Dec 31, 2023

#5 in #ml-model


1,178 downloads per month
Used in 5 crates (via rten)

MIT/Apache

89KB
2K SLoC

rten-vecmath

This crate provides portable SIMD types that abstract over SIMD intrinsics on different architectures. Unlike std::simd, this works on stable Rust. There is also functionality to detect the available instructions at runtime and dispatch to the optimal implementation.
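The runtime-detection-and-dispatch pattern described above can be sketched in plain Rust using the standard library's feature-detection macro. This is a minimal illustration of the general technique, not this crate's actual API; the function names are hypothetical, and the AVX2 body is a scalar stand-in where real code would use core::arch intrinsics.

```rust
// Sketch of runtime dispatch: pick a SIMD-capable implementation if the
// CPU supports it, otherwise fall back to a portable scalar loop.
// (Hypothetical names; not the rten-vecmath API.)

fn sum_scalar(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    // A real implementation would use core::arch::x86_64 intrinsics here;
    // the scalar loop stands in to keep the sketch short.
    xs.iter().sum()
}

fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe: we just verified at runtime that AVX2 is available.
            return unsafe { sum_avx2(xs) };
        }
    }
    sum_scalar(xs)
}

fn main() {
    let xs = [1.0f32, 2.0, 3.0, 4.0];
    println!("{}", sum(&xs));
}
```

The detection happens once per call here; a production library would typically cache the chosen implementation (e.g. via a function pointer) to avoid repeating the check.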

This crate also contains SIMD-vectorized versions of math functions such as exp, erf, tanh, and softmax, which are performance-critical in machine-learning models.
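As a reference point for one of the functions mentioned, here is a scalar sketch of the numerically stable softmax (subtracting the maximum before exponentiating to avoid overflow). The vectorized versions in the crate compute the same function with SIMD; this standalone sketch is for illustration only.

```rust
// Numerically stable softmax, scalar sketch: shift by the max so that
// exp() never overflows, then normalize so the outputs sum to 1.
fn softmax(xs: &mut [f32]) {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let mut sum = 0.0f32;
    for x in xs.iter_mut() {
        *x = (*x - max).exp();
        sum += *x;
    }
    for x in xs.iter_mut() {
        *x /= sum;
    }
}

fn main() {
    let mut xs = [1.0f32, 2.0, 3.0];
    softmax(&mut xs);
    let total: f32 = xs.iter().sum();
    println!("{}", (total - 1.0).abs() < 1e-6); // outputs sum to ~1
}
```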


lib.rs:

SIMD-vectorized implementations of various math functions that are commonly used in neural networks.

For each function in this library there are multiple variants, which typically include:

  • A version that operates on scalars
  • A version that reads values from an input slice and writes to the corresponding position in an equal-length output slice. These have a vec_ prefix.
  • A version that reads values from a mutable input slice and writes the computed values back in-place. These have a vec_ prefix and _in_place suffix.

All variants use the same underlying implementation and should have the same accuracy.
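The variant naming scheme above can be sketched as follows, using exp as the example function. The names and signatures here are illustrative of the pattern, not the crate's actual API; all three variants delegate to one core implementation, which is why they share the same accuracy.

```rust
// Sketch of the scalar / vec_ / vec_..._in_place variant pattern.
// (Hypothetical signatures; not the rten-vecmath API.)

/// Scalar variant: operates on a single value.
fn exp(x: f32) -> f32 {
    x.exp()
}

/// Slice variant (`vec_` prefix): reads from `input` and writes to the
/// corresponding position in an equal-length `output` slice.
fn vec_exp(input: &[f32], output: &mut [f32]) {
    assert_eq!(input.len(), output.len());
    for (x, y) in input.iter().zip(output.iter_mut()) {
        *y = exp(*x);
    }
}

/// In-place variant (`vec_` prefix, `_in_place` suffix): overwrites the
/// mutable input slice with the computed values.
fn vec_exp_in_place(data: &mut [f32]) {
    for x in data.iter_mut() {
        *x = exp(*x);
    }
}

fn main() {
    let input = [0.0f32, 1.0];
    let mut out = [0.0f32; 2];
    vec_exp(&input, &mut out);

    let mut data = [0.0f32, 1.0];
    vec_exp_in_place(&mut data);

    // Both slice variants produce identical results, since they share
    // the same underlying scalar implementation.
    assert_eq!(out, data);
    println!("{:?}", out);
}
```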

See the source code for comments on accuracy.
