An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: tensorrt-inference-server

DataXujing/TensorRT_CV

:rocket::rocket::rocket: NVIDIA TensorRT inference acceleration tutorial!

Language: CSS - Size: 67 MB - Last synced at: 4 months ago - Pushed at: almost 4 years ago - Stars: 134 - Forks: 20

chiehpower/Setup-deeplearning-tools

Set up CI and deep-learning tooling (CUDA, cuDNN, TensorRT, onnx2trt, onnxruntime, onnxsim, PyTorch, Triton Inference Server, Bazel, Tesseract, PaddleOCR, NVIDIA Docker, MinIO, Supervisord) on an AGX or a PC from scratch.

Language: Python - Size: 4.7 MB - Last synced at: 4 months ago - Pushed at: almost 2 years ago - Stars: 43 - Forks: 6

rmccorm4/TRTIS-Go-Client

🖧 Simple gRPC client in Go for communicating with the TensorRT Inference Server

Language: Go - Size: 1.12 MB - Last synced at: about 2 months ago - Pushed at: almost 6 years ago - Stars: 5 - Forks: 0

cap-ntu/ML-Model-CI

MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying Machine-Learning-as-a-Service (MLaaS) models, bridging the gap between current ML training and serving systems.

Language: Python - Size: 5.37 MB - Last synced at: almost 2 years ago - Pushed at: over 2 years ago - Stars: 185 - Forks: 35