GitHub / notAI-tech / fastDeploy
Deploy DL/ML inference pipelines with minimal extra code.
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/notAI-tech%2FfastDeploy
PURL: pkg:github/notAI-tech/fastDeploy
Stars: 98
Forks: 17
Open issues: 0
License: MIT
Language: Python
Size: 15.7 MB
Dependencies parsed at: Pending
Created at: over 5 years ago
Updated at: 2 months ago
Pushed at: 9 months ago
Last synced at: 6 days ago
Commit Stats
Commits: 430
Authors: 5
Mean commits per author: 86.0
Development Distribution Score: 0.326
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/notAI-tech/fastDeploy
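The mean commits per author above is simply total commits divided by author count (430 / 5 = 86.0). The Development Distribution Score (DDS) is commonly computed as 1 minus the top author's share of all commits; a minimal sketch under that assumption, with a hypothetical per-author split chosen to match the reported figures:

```python
def mean_commits(commits_per_author):
    """Average commits per author."""
    return sum(commits_per_author) / len(commits_per_author)

def dds(commits_per_author):
    """Development Distribution Score: 1 - (top author's commits / total).
    Assumed definition; values near 0 mean one author dominates."""
    total = sum(commits_per_author)
    return 1 - max(commits_per_author) / total

# Hypothetical split consistent with the stats shown (430 commits, 5 authors):
per_author = [290, 80, 40, 15, 5]
print(mean_commits(per_author))      # 86.0
print(round(dds(per_author), 3))     # 0.326
```

The low DDS (0.326) indicates one author contributed roughly two thirds of all commits.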
Topics: deep-learning, docker, falcon, gevent, gunicorn, http-server, inference-server, model-deployment, model-serving, python, pytorch, serving, streaming-audio, tensorflow-serving, tf-serving, torchserve, triton, triton-inference-server, triton-server, websocket