An open API service providing repository metadata for many open source software ecosystems.

GitHub / intel / neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime

JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fneural-compressor
PURL: pkg:github/intel/neural-compressor
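
A minimal sketch of pulling this repository's metadata from the JSON API endpoint above. The response field names used here (full_name, stargazers_count, language) are assumptions about the schema, not confirmed by this page.

```python
# Sketch: fetch intel/neural-compressor metadata from the ecosyste.ms JSON API.
# Field names below are assumed, not taken from this page.
import requests

url = "http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fneural-compressor"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()

print(data.get("full_name"))         # e.g. "intel/neural-compressor" (assumed field)
print(data.get("stargazers_count"))  # star count (assumed field)
print(data.get("language"))          # primary language (assumed field)
```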

Stars: 2,491
Forks: 281
Open issues: 48

License: apache-2.0
Language: Python
Size: 468 MB
Dependencies parsed at: Pending

Created at: about 5 years ago
Updated at: 4 days ago
Pushed at: 4 days ago
Last synced at: 3 days ago

Commit Stats

Commits: 3588
Authors: 125
Mean commits per author: 28.7
Development Distribution Score: 0.908
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/intel/neural-compressor
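
A brief sketch of how the figures above could be derived from per-author commit counts, assuming the Development Distribution Score is one minus the most active author's share of commits (a common definition; not confirmed by this page). The author counts in the example are placeholders, not real data for this repository.

```python
# Sketch: mean commits per author and a distribution score from per-author counts.
# The Counter data is hypothetical; for this repo the reported values are
# 3588 commits / 125 authors ≈ 28.7 and a score of 0.908.
from collections import Counter

commits_by_author = Counter({"author_a": 330, "author_b": 120, "author_c": 90})  # placeholder data
total_commits = sum(commits_by_author.values())
authors = len(commits_by_author)

mean_commits_per_author = total_commits / authors
dds = 1 - max(commits_by_author.values()) / total_commits  # assumed definition

print(round(mean_commits_per_author, 1), round(dds, 3))
```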

Topics: auto-tuning, awq, fp4, gptq, int4, int8, knowledge-distillation, large-language-models, low-precision, mxformat, post-training-quantization, pruning, quantization, quantization-aware-training, smoothquant, sparsegpt, sparsity
