GitHub / light-magician / rust-ml-inference
A library for running GPU-accelerated neural networks on the web via WebAssembly. Train your neural net in PyTorch, convert it to ONNX, then run GPU-accelerated inference on it via Rust and WASM.
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/light-magician%2Frust-ml-inference
Stars: 0
Forks: 0
Open issues: 0
License: MIT
Language: Jupyter Notebook
Size: 349 KB
Dependencies parsed at: Pending
Created at: 7 months ago
Updated at: 4 months ago
Pushed at: 4 months ago
Last synced at: about 1 month ago
Topics: llm-inference, neural-network, onnx, onnx-runtime, onnx-torch, pytorch, rust, webassembly