Ecosyste.ms: Repos
An open API service providing repository metadata for many open source software ecosystems.
GitHub / microsoft / Olive
Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation.
JSON API: https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FOlive
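The lookup URL encodes the `owner/name` pair as a single path segment (`microsoft%2FOlive`). A minimal sketch of building that URL with the standard library; the response-field names mentioned in the comment are assumptions about the API's JSON shape, not taken from this page:

```python
from urllib.parse import quote

def repo_api_url(host: str, owner: str, name: str) -> str:
    """Build a repos.ecosyste.ms repository lookup URL.

    The owner/name pair is percent-encoded into one path segment,
    so the '/' becomes '%2F' (e.g. 'microsoft%2FOlive')."""
    repo_segment = quote(f"{owner}/{name}", safe="")
    return f"https://repos.ecosyste.ms/api/v1/hosts/{quote(host)}/repositories/{repo_segment}"

url = repo_api_url("GitHub", "microsoft", "Olive")
print(url)
# To fetch live metadata (field names assumed, e.g. full_name, stargazers_count):
#   import json, urllib.request
#   with urllib.request.urlopen(url) as resp:
#       data = json.load(resp)
```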
Stars: 1,250
Forks: 134
Open Issues: 28
License: MIT
Language: Python
Repo Size: 88.5 MB
Dependencies: 240
Created: almost 5 years ago
Updated: 13 days ago
Last pushed: 18 days ago
Last synced: 18 days ago
Commit Stats
Commits: 795
Authors: 35
Mean commits per author: 22.71
Development Distribution Score: 0.795
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/microsoft/Olive
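The Development Distribution Score above is commonly defined as 1 minus the share of commits made by the most active author, so higher values mean work is less concentrated. A quick sanity check of the published figures; the top-author commit count of 163 is back-derived from the score, not stated on this page:

```python
def development_distribution_score(top_author_commits: int, total_commits: int) -> float:
    # DDS = 1 - (commits by the most active author / total commits);
    # 0 means one person wrote everything, values near 1 mean broad distribution.
    return 1 - top_author_commits / total_commits

total_commits, authors = 795, 35
print(round(total_commits / authors, 2))  # mean commits per author -> 22.71
# Assumed top-author count consistent with the published score of 0.795:
print(round(development_distribution_score(163, total_commits), 3))  # -> 0.795
```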
Dependencies
- actions/checkout v3 composite
- github/codeql-action/analyze v2 composite
- github/codeql-action/autobuild v2 composite
- github/codeql-action/init v2 composite
- mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04 latest build
- mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04 latest build
- mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04 latest build
- apache_beam *
- datasets *
- evaluate *
- neural-compressor *
- onnxruntime *
- optimum *
- scikit-learn *
- tabulate *
- transformers *
- apache_beam *
- datasets *
- evaluate *
- neural-compressor *
- onnxruntime-gpu *
- optimum *
- scikit-learn *
- tabulate *
- transformers *
- autodoc_pydantic <2.0.0
- azure-ai-ml >=0.1.0b6
- azure-identity *
- azureml-fsspec *
- docker *
- myst_parser *
- onnxconverter_common *
- psutil *
- pytorch_lightning *
- sphinx >=6.1.3
- sphinx-rtd-theme *
- sphinx-tabs *
- sphinxcontrib-jquery *
- azure-ai-ml *
- azure-identity *
- datasets *
- docker *
- evaluate *
- neural-compressor *
- scikit-learn *
- scipy *
- tabulate *
- transformers *
- accelerate *
- diffusers *
- invisible-watermark *
- onnx *
- onnxruntime-directml >=1.15.0
- optimum *
- pillow *
- protobuf ==3.20.3
- torch ==1.13.1
- torchvision ==0.14.1
- transformers *
- onnxruntime-gpu >=1.15.1
- optimum >=1.11.0
- torch >=2.0.0
- transformers >=4.31.0
- azure-ai-ml *
- azure-identity *
- azureml-fsspec *
- pytorch-lightning *
- scipy *
- tabulate *
- torchvision *
- azure-ai-ml *
- azure-identity *
- azureml-fsspec *
- onnxruntime <=1.15.1
- pytorch-lightning *
- scipy *
- tabulate *
- torchvision *
- neural-compressor *
- onnx ==1.14.0
- onnxruntime >=1.15.0
- onnxruntime-extensions ==0.8.0
- tabulate *
- torch >=1.13.1
- transformers >=4.23.1
- addict * development
- black * development
- flake8 * development
- pre-commit * development
- pytest * development
- numpy *
- onnx *
- optuna *
- pandas *
- protobuf <4.0.0
- pydantic <2.0.0
- pyyaml *
- torch *
- torchmetrics *
- transformers *
- datasets * test
- olive-ai * test
- onnxconverter_common * test
- torchvision * test
- transformers * test
- accelerate * test
- azure-ai-ml * test
- azure-identity * test
- azure-storage-blob * test
- azureml-evaluate-mlflow >=0.0.14 test
- azureml-fsspec * test
- coverage * test
- datasets * test
- docker * test
- evaluate * test
- neural-compressor * test
- onnxconverter_common * test
- onnxruntime-extensions * test
- openvino ==2022.3.0 test
- openvino-dev ==2022.3.0 test
- optimum * test
- pandas * test
- peft * test
- plotly * test
- protobuf ==3.20.3 test
- psutil * test
- pytorch_lightning * test
- sentencepiece * test
- tabulate * test
- torchvision * test
- azure-ai-ml >=0.1.0b6
- azure-identity *
- azureml-fsspec *
- actions/checkout v4 composite
- actions/setup-python v4 composite
- github/codeql-action/upload-sarif v2 composite
- reviewdog/action-misspell v1 composite
- reviewdog/action-shellcheck v1 composite
- accelerate *
- diffusers *
- invisible-watermark *
- onnx *
- optimum *
- pillow *
- protobuf ==3.20.3
- torch *
- transformers *
- accelerate *
- bitsandbytes *
- peft *
- scikit-learn *
- sentencepiece *
- datasets >=2.8.0
- onnx >=1.14.0
- protobuf ==3.20.2
- transformers >=4.33.2
- accelerate ==0.23.0
- bitsandbytes ==0.41.1
- optimum *
- peft *
- scikit-learn *
- torch-ort *
- datasets *
- neural-compressor >=2.3
- onnxruntime *
- sentencepiece *
- transformers *
- datasets *
- einops *
- sentencepiece *
- transformers ==4.34.1
- bitsandbytes * test
- datasets *
- neural-compressor >=2.4.1
- onnxruntime-extensions *
- optimum >=1.14.1
- tabulate *
- transformers >=4.34.99
- packaging *
- pillow *
- scipy *
- torchvision *
- accelerate *
- diffusers *
- onnx *
- pillow *
- protobuf ==3.20.3
- tabulate *
- torch *
- transformers *
- Pygments *
- huggingface-hub *
- markdown *
- mdtex2html *
- neural-compressor *
- optimum *
- protobuf ==3.20.3
- sentencepiece *
- tabulate *
- torch *
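Each entry above follows the pattern `name requirement [scope]`, where `*` means any version and the trailing scope (`composite`, `build`, `development`, `test`) is omitted for plain runtime dependencies. A minimal sketch of parsing such lines and tallying scopes, using sample entries copied from the list; the `runtime` default is an assumption about unscoped entries:

```python
from collections import Counter

# Scope labels observed in the list above; anything else is treated as runtime.
KNOWN_SCOPES = {"composite", "build", "development", "test"}

def parse_dep(line: str) -> tuple[str, str, str]:
    """Split '- name requirement [scope]' into (name, requirement, scope)."""
    parts = line.lstrip("- ").split()
    scope = parts.pop() if parts[-1] in KNOWN_SCOPES else "runtime"
    name, requirement = parts[0], " ".join(parts[1:]) or "*"
    return name, requirement, scope

sample = [
    "- actions/checkout v3 composite",
    "- onnxruntime-directml >=1.15.0",
    "- addict * development",
    "- openvino ==2022.3.0 test",
]
counts = Counter(parse_dep(line)[2] for line in sample)
print(counts)
```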