GitHub / SarthakGarg19 / Accelerating-Inference-in-Tensorflow-using-TensorRT
TensorRT optimises deep learning models by both making them lightweight and accelerating their inference speed, aiming to extract every ounce of performance from the model and making it well suited for deployment at the edge. This repository helps you convert any deep learning model from TensorFlow to TensorRT! A minimal conversion sketch is shown below.
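For orientation, here is a minimal sketch of a typical TensorFlow-to-TensorRT (TF-TRT) conversion using TensorFlow 2.x's TrtGraphConverterV2 SavedModel API. The directory names are placeholders, and the repository's notebooks may use a different TensorFlow version or workflow.

```python
# Minimal TF-TRT conversion sketch (TensorFlow 2.x SavedModel API).
# Directory names are placeholders, not paths from this repository.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# FP16 precision trades a small amount of accuracy for faster inference
# on GPUs with hardware FP16 support.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="my_saved_model",   # placeholder: original TF SavedModel
    conversion_params=params)
converter.convert()                  # replace supported subgraphs with TensorRT ops
converter.save("my_saved_model_trt") # optimised SavedModel, loadable like any other
```

The optimised SavedModel can then be loaded with tf.saved_model.load and served as usual; only the TensorRT-compatible subgraphs are replaced, so unsupported ops still run in native TensorFlow.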
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SarthakGarg19%2FAccelerating-Inference-in-Tensorflow-using-TensorRT
PURL: pkg:github/SarthakGarg19/Accelerating-Inference-in-Tensorflow-using-TensorRT
Stars: 2
Forks: 1
Open issues: 0
License: None
Language: Jupyter Notebook
Size: 32 MB
Dependencies parsed at: Pending
Created at: almost 6 years ago
Updated at: almost 5 years ago
Pushed at: almost 6 years ago
Last synced at: over 2 years ago
Topics: deep-learning, edge-devices, tensorflow, tensorrt, tensorrt-conversion