k9ele7en / Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a PyTorch -> ONNX -> TensorRT converter and inference pipelines (standalone TensorRT and multi-format Triton server); sketches of both steps appear below. Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
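A minimal sketch of the first conversion step (PyTorch -> ONNX), assuming the model definition and pretrained weight names from the upstream CRAFT-pytorch repo (`craft.CRAFT`, `craft_mlt_25k.pth`); the repo's own converter script may differ in input size, opset, or tensor names. The subsequent ONNX -> TensorRT step is typically done with `trtexec` (e.g. `trtexec --onnx=craft.onnx --saveEngine=craft.plan --fp16`), though again the repo may wrap this differently.

```python
# Hedged sketch: export CRAFT to ONNX. Model class and weight file names
# are assumptions taken from the upstream CRAFT-pytorch project.
from collections import OrderedDict

import torch
from craft import CRAFT  # assumed: model definition from CRAFT-pytorch


def strip_module_prefix(state_dict):
    # Pretrained CRAFT weights were saved from a DataParallel wrapper,
    # so keys carry a "module." prefix; strip it for a plain model.
    return OrderedDict(
        (k.replace("module.", "", 1), v) for k, v in state_dict.items()
    )


net = CRAFT()
net.load_state_dict(
    strip_module_prefix(torch.load("craft_mlt_25k.pth", map_location="cpu"))
)
net.eval()

dummy = torch.randn(1, 3, 768, 768)  # NCHW image batch; size is an assumption
torch.onnx.export(
    net,
    dummy,
    "craft.onnx",
    input_names=["input"],
    output_names=["output", "feature"],  # CRAFT returns score maps + features
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"}},
    opset_version=11,
)
```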
Stars: 25
Forks: 6
Open issues: 1
License: BSD-3-Clause
Language: Python
Size: 15.5 MB
Created at: almost 4 years ago
Updated at: over 2 years ago
Pushed at: almost 4 years ago
Topics: inference, inference-engine, inference-server, nvidia-docker, onnx, onnx-torch, pytorch, tensorrt, tensorrt-conversion, text-detection, text-detection-from-image, triton-inference-server
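For the serving side, a hedged sketch of querying the deployed model with the standard Triton gRPC client is below. The model and tensor names (`craft`, `input`, `output`) are assumptions matching the export sketch above; the repo's actual model-repository config may use different names, ports, or a preprocessing step before inference.

```python
# Hedged sketch: query a Triton server hosting the exported CRAFT model.
import numpy as np
import tritonclient.grpc as grpcclient

# Default Triton gRPC port; adjust to your deployment.
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Stand-in for a preprocessed, normalized image batch (NCHW, float32).
image = np.random.rand(1, 3, 768, 768).astype(np.float32)

inp = grpcclient.InferInput("input", list(image.shape), "FP32")
inp.set_data_from_numpy(image)

result = client.infer(
    model_name="craft",  # assumed model name in the Triton model repository
    inputs=[inp],
    outputs=[grpcclient.InferRequestedOutput("output")],
)
score_maps = result.as_numpy("output")  # CRAFT region/affinity score maps
print(score_maps.shape)
```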