An open API service providing repository metadata for many open source software ecosystems.

Topic: "inference-optimization"

google/XNNPACK

High-efficiency floating-point neural network inference operators for mobile, server, and Web

Language: C - Size: 167 MB - Last synced at: 4 days ago - Pushed at: 4 days ago - Stars: 2,016 - Forks: 416

alibaba/BladeDISC

BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.

Language: C++ - Size: 21.2 MB - Last synced at: 3 days ago - Pushed at: 5 months ago - Stars: 864 - Forks: 164

jiazhihao/TASO

The Tensor Algebra SuperOptimizer for Deep Learning

Language: C++ - Size: 1.21 MB - Last synced at: about 10 hours ago - Pushed at: over 2 years ago - Stars: 711 - Forks: 94

mit-han-lab/inter-operator-scheduler

[MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration

Language: C++ - Size: 3.13 MB - Last synced at: 4 days ago - Pushed at: about 3 years ago - Stars: 199 - Forks: 33

imedslab/pytorch_bn_fusion 📦

Batch normalization fusion for PyTorch. This is an archived repository and is no longer maintained.

Language: Python - Size: 54.7 KB - Last synced at: 3 days ago - Pushed at: about 5 years ago - Stars: 197 - Forks: 29
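The fusion this repo performs can be sketched in a few lines: at inference time a BatchNorm layer is just a per-channel affine map, so its scale and shift can be folded into the preceding layer's weights and bias. A minimal pure-Python sketch with per-channel scalars (the real implementation operates on PyTorch conv tensors):

```python
import math

def fuse_bn(weight, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding layer's
    per-channel weight and bias (toy scalar-per-channel version)."""
    fused_w, fused_b = [], []
    for w, b, g, bt, m, v in zip(weight, bias, gamma, beta, mean, var):
        scale = g / math.sqrt(v + eps)
        fused_w.append(w * scale)
        fused_b.append((b - m) * scale + bt)
    return fused_w, fused_b

# The fused layer reproduces layer+BN output for any input x:
w, b = [2.0], [1.0]                      # one output channel
g, bt, m, v = [0.5], [0.3], [0.4], [0.25]
fw, fb = fuse_bn(w, b, g, bt, m, v)
x = 3.0
original = (w[0] * x + b[0] - m[0]) / math.sqrt(v[0] + 1e-5) * g[0] + bt[0]
fused = fw[0] * x + fb[0]
assert abs(original - fused) < 1e-9
```

After fusion the BN layer can be dropped entirely, saving one memory pass per channel at inference time.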

ZFTurbo/Keras-inference-time-optimizer

Optimize layers structure of Keras model to reduce computation time

Language: Python - Size: 77.1 KB - Last synced at: 8 days ago - Pushed at: almost 5 years ago - Stars: 157 - Forks: 18

Rapternmn/PyTorch-Onnx-Tensorrt

A set of tools to make your life easier with TensorRT and ONNX Runtime. This repo is designed for YOLOv3.

Language: Python - Size: 2.83 MB - Last synced at: about 1 year ago - Pushed at: over 5 years ago - Stars: 80 - Forks: 18

keli-wen/AGI-Study

Blog posts, reading reports, and code examples for AGI/LLM-related knowledge.

Language: Python - Size: 19.5 MB - Last synced at: 5 days ago - Pushed at: 4 months ago - Stars: 37 - Forks: 1

lmaxwell/Armednn

Cross-platform modular neural network inference library, small and efficient

Language: C++ - Size: 1.05 MB - Last synced at: almost 2 years ago - Pushed at: about 2 years ago - Stars: 13 - Forks: 2

ksm26/Efficiently-Serving-LLMs

Learn the ins and outs of efficiently serving Large Language Models (LLMs). Dive into optimization techniques, including KV caching and Low Rank Adapters (LoRA), and gain hands-on experience with Predibase’s LoRAX framework inference server.

Language: Jupyter Notebook - Size: 2.34 MB - Last synced at: about 2 months ago - Pushed at: about 1 year ago - Stars: 11 - Forks: 3
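KV caching, one of the techniques this course covers, stores each token's attention keys and values so later decoding steps reuse them instead of recomputing the whole prefix. A toy single-head, pure-Python sketch (the `KVCache` class and its two-dimensional vectors are illustrative, not from the course material):

```python
import math

class KVCache:
    """Toy single-head attention cache: each token's key/value is
    computed once and reused at every later decoding step."""
    def __init__(self):
        self.keys, self.values = [], []

    def attend(self, q, k, v):
        # Append this step's key/value, then attend over all cached steps.
        self.keys.append(k)
        self.values.append(v)
        scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(len(q))
                  for key in self.keys]
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        dim = len(self.values[0])
        return [sum(w * val[d] for w, val in zip(weights, self.values))
                for d in range(dim)]

cache = KVCache()
out1 = cache.attend([1.0, 0.0], [1.0, 0.0], [2.0, 0.0])
out2 = cache.attend([0.0, 1.0], [0.0, 1.0], [0.0, 3.0])
assert len(cache.keys) == 2  # step-1 key was reused, not recomputed
```

Without the cache, step *n* would recompute keys and values for all *n* prefix tokens, making decoding quadratic in sequence length.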

grazder/template.cpp

A template for getting started writing code using GGML

Language: C++ - Size: 40 KB - Last synced at: about 1 month ago - Pushed at: about 1 year ago - Stars: 9 - Forks: 0

ccs96307/fast-llm-inference

Accelerating LLM inference with techniques like speculative decoding, quantization, and kernel fusion, focusing on implementing state-of-the-art research papers.

Language: Jupyter Notebook - Size: 168 KB - Last synced at: 4 days ago - Pushed at: 4 days ago - Stars: 8 - Forks: 1
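Speculative decoding, the first technique listed, lets a cheap draft model propose several tokens that the expensive target model then verifies in a single pass, accepting the longest agreeing prefix. A toy greedy sketch over integer tokens (the `target`/`draft` callables are illustrative stand-ins for real models):

```python
def speculative_decode(target, draft, prompt, k=4, steps=8):
    """Toy greedy speculative decoding: `draft` proposes k tokens,
    `target` verifies them, and the longest agreeing prefix is
    accepted (plus one corrected token on the first mismatch)."""
    seq = list(prompt)
    while len(seq) < len(prompt) + steps:
        # Draft model proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target verifies every proposed position (one expensive pass).
        accepted, ctx = [], list(seq)
        for t in proposal:
            if target(ctx) == t:
                accepted.append(t)
                ctx.append(t)
            else:
                accepted.append(target(ctx))  # target's correction
                break
        seq.extend(accepted)
    return seq[:len(prompt) + steps]

# Toy models over integer tokens: target counts up; draft agrees
# except at every 3rd position.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + 1 if len(ctx) % 3 else ctx[-1] + 2
out = speculative_decode(target, draft, [0], k=4, steps=6)
assert out == [0, 1, 2, 3, 4, 5, 6]  # identical to pure target decoding
```

The output always matches what the target model alone would produce; the speedup comes from verifying several draft tokens per expensive target pass.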

Harly-1506/Faster-Inference-yolov8

Faster inference YOLOv8: Optimize and export YOLOv8 models for faster inference using OpenVINO and NumPy 🔢

Language: Python - Size: 49.8 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 8 - Forks: 1

vbdi/divprune

[CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models

Language: Python - Size: 11 MB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 7 - Forks: 0
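As a rough illustration of diversity-based token pruning (a greedy max-min selection sketch, not DivPrune's exact algorithm), one can keep the subset of token embeddings that maximizes the minimum pairwise distance, discarding near-duplicates:

```python
def maxmin_prune(tokens, keep):
    """Greedily keep `keep` token embeddings that maximize the minimum
    pairwise distance -- a toy stand-in for diversity-based pruning."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    selected = [0]  # seed with the first token
    while len(selected) < keep:
        best, best_d = None, -1.0
        for i in range(len(tokens)):
            if i in selected:
                continue
            d = min(dist(tokens[i], tokens[j]) for j in selected)
            if d > best_d:
                best, best_d = i, d
        selected.append(best)
    return sorted(selected)

# Four visual tokens, two near-duplicates: pruning keeps the diverse ones.
tokens = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.0, 5.0]]
assert maxmin_prune(tokens, 3) == [0, 2, 3]
```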

EZ-Optimium/Optimium

Your AI Catalyst: inference backend to maximize your model's inference performance

Language: C++ - Size: 101 MB - Last synced at: 25 days ago - Pushed at: 5 months ago - Stars: 5 - Forks: 0

Bisonai/ncnn Fork of Tencent/ncnn

Modified inference engine for quantized convolution using product quantization

Language: C++ - Size: 7.96 MB - Last synced at: about 1 year ago - Pushed at: almost 3 years ago - Stars: 4 - Forks: 0
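Product quantization, the technique behind this fork, splits each vector into subvectors and quantizes each subvector against its own small codebook, so a long float vector is stored as a few codeword indices. A minimal sketch (the codebooks are hand-written for illustration; real codebooks come from k-means over training data):

```python
def pq_encode(vec, codebooks):
    """Encode a vector as one nearest-codeword index per subspace."""
    m = len(codebooks)
    sub = len(vec) // m
    codes = []
    for i, cb in enumerate(codebooks):
        chunk = vec[i * sub:(i + 1) * sub]
        codes.append(min(range(len(cb)),
                         key=lambda j: sum((a - b) ** 2
                                           for a, b in zip(chunk, cb[j]))))
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct an approximate vector from codeword indices."""
    out = []
    for code, cb in zip(codes, codebooks):
        out.extend(cb[code])
    return out

# Two subspaces of length 2, two codewords each: 2 bits per vector
# instead of four 32-bit floats.
codebooks = [[[0.0, 0.0], [1.0, 1.0]],
             [[0.0, 1.0], [1.0, 0.0]]]
codes = pq_encode([0.9, 1.1, 0.2, 0.9], codebooks)
assert codes == [1, 0]
assert pq_decode(codes, codebooks) == [1.0, 1.0, 0.0, 1.0]
```

In a quantized convolution, dot products against the codewords can be precomputed into lookup tables, replacing multiply-accumulates with table lookups.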

amazon-science/mlp-rank-pruning

MLP-Rank: a graph-theoretical approach to structured pruning of deep neural networks based on weighted PageRank centrality, as introduced in the related thesis.

Language: Python - Size: 60.5 KB - Last synced at: 14 days ago - Pushed at: about 1 year ago - Stars: 3 - Forks: 1
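The core idea of ranking units by weighted PageRank centrality can be sketched with plain power iteration over a weight matrix (a toy stand-in for MLP-Rank's actual graph construction; low-ranked units are pruning candidates):

```python
def weighted_pagerank(adj, d=0.85, iters=50):
    """Power iteration for weighted PageRank, where adj[i][j] is the
    edge weight from node i to node j."""
    n = len(adj)
    rank = [1.0 / n] * n
    out_w = [sum(row) for row in adj]
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for i in range(n):
            if out_w[i] == 0:
                continue  # dangling node: contributes nothing
            for j in range(n):
                new[j] += d * rank[i] * adj[i][j] / out_w[i]
        rank = new
    return rank

# Three "neurons"; node 2 receives the heaviest incoming weight,
# so it ranks highest and would be kept when pruning.
adj = [[0, 1, 4], [1, 0, 4], [1, 1, 0]]
rank = weighted_pagerank(adj)
assert rank[2] == max(rank)
```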

sjlee25/batch-partitioning

Batch Partitioning for Multi-PE Inference with TVM (2020)

Language: Python - Size: 3.79 MB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 3 - Forks: 0

zhliuworks/Fast-MobileNetV2

🤖️ Optimized CUDA Kernels for Fast MobileNetV2 Inference

Language: Cuda - Size: 15 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 3 - Forks: 1

piotrostr/infer-trt

Interface for TensorRT engines inference along with an example of YOLOv4 engine being used.

Language: Python - Size: 17.6 KB - Last synced at: 3 months ago - Pushed at: about 3 years ago - Stars: 2 - Forks: 0

kiritigowda/mivisionx-inference-analyzer

MIVisionX Python Inference Analyzer uses pre-trained ONNX/NNEF/Caffe models to analyze inference results and summarize individual image results

Language: Python - Size: 11.7 MB - Last synced at: about 1 month ago - Pushed at: over 4 years ago - Stars: 2 - Forks: 3

yester31/TensorRT_Examples

Useful sample code for TensorRT models using ONNX

Language: Python - Size: 240 KB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 1 - Forks: 1

Wb-az/YOLOv8-Image-detection

YOLOV8 - Object detection

Language: Jupyter Notebook - Size: 131 MB - Last synced at: about 1 year ago - Pushed at: over 1 year ago - Stars: 1 - Forks: 2

aalbaali/LieBatch

Batch estimation on Lie groups

Language: MATLAB - Size: 3.5 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 1 - Forks: 1

effrosyni-papanastasiou/constrained-em

A constrained expectation-maximization algorithm for feasible graph inference.

Language: Jupyter Notebook - Size: 16.6 KB - Last synced at: about 2 years ago - Pushed at: almost 4 years ago - Stars: 1 - Forks: 0

cedrickchee/pytorch-mobile-ios Fork of pytorch/ios-demo-app

PyTorch Mobile: iOS examples

Size: 47.3 MB - Last synced at: about 1 year ago - Pushed at: over 5 years ago - Stars: 1 - Forks: 0

Keshavpatel2/local-llm-workbench

🧠 A comprehensive toolkit for benchmarking, optimizing, and deploying local Large Language Models. Includes performance testing tools, optimized configurations for CPU/GPU/hybrid setups, and detailed guides to maximize LLM performance on your hardware.

Language: Shell - Size: 8.79 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

shreyansh26/Accelerating-Cross-Encoder-Inference

Leveraging torch.compile to accelerate cross-encoder inference

Language: Python - Size: 3.84 MB - Last synced at: 2 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0

OneAndZero24/TRTTL

TensorRT C++ Template Library

Language: C++ - Size: 423 KB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 0 - Forks: 0

matteo-stat/transformers-nlp-multi-label-classification

This repo provides scripts for fine-tuning HuggingFace Transformers, setting up pipelines and optimizing multi-label classification models for inference. They are based on my experience developing a custom chatbot, I’m sharing these in the hope they will help others to quickly fine-tune and use models in their projects! 😊

Language: Python - Size: 31.3 KB - Last synced at: 3 months ago - Pushed at: 9 months ago - Stars: 0 - Forks: 0

matteo-stat/transformers-nlp-ner-token-classification

This repo provides scripts for fine-tuning HuggingFace Transformers, setting up pipelines and optimizing token classification models for inference. They are based on my experience developing a custom chatbot, I’m sharing these in the hope they will help others to quickly fine-tune and use models in their projects! 😊

Language: Python - Size: 22.5 KB - Last synced at: 3 months ago - Pushed at: 9 months ago - Stars: 0 - Forks: 0

manickavela29/EmoTwitter

ONNX Runtime-based inference optimization of a RoBERTa model trained for sentiment analysis on a Twitter dataset

Language: Jupyter Notebook - Size: 12.7 KB - Last synced at: 11 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 0

ankdeshm/inference-optimization

A compilation of various ML and DL models and ways to optimize their inference.

Language: Jupyter Notebook - Size: 6.17 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

cedrickchee/pytorch-mobile-android Fork of pytorch/android-demo-app

PyTorch Mobile: Android examples of usage in applications

Size: 53 MB - Last synced at: about 1 year ago - Pushed at: over 5 years ago - Stars: 0 - Forks: 1

ieee820/ncnn Fork of Tencent/ncnn

ncnn is a high-performance neural network inference framework optimized for the mobile platform

Language: C++ - Size: 6.81 MB - Last synced at: almost 2 years ago - Pushed at: almost 6 years ago - Stars: 0 - Forks: 0

goshaQ/inference-optimizer

A simple tool that applies structure-level optimizations (e.g. quantization) to a TensorFlow model

Language: Python - Size: 6.84 KB - Last synced at: almost 2 years ago - Pushed at: almost 7 years ago - Stars: 0 - Forks: 1
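Quantization of the kind this tool applies can be illustrated with symmetric per-tensor int8 quantization: scale floats so the largest magnitude maps to 127, round to integers, and keep the scale for dequantization (a toy sketch, not this tool's TensorFlow graph rewrite):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into
    [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [x * scale for x in q]

weights = [0.6, -1.0, 0.25]
q, scale = quantize_int8(weights)
assert q == [76, -127, 32]
# Round-trip error is bounded by roughly half a quantization step.
assert all(abs(a - b) < 0.005
           for a, b in zip(dequantize(q, scale), weights))
```

Storing 8-bit integers instead of 32-bit floats cuts model size by 4x and enables integer matrix kernels, at the cost of the small rounding error shown above.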

Related Topics
deep-learning 8 onnx 6 pytorch 5 neural-network 5 machine-learning 5 tensorrt 4 onnxruntime 4 object-detection 3 inference 3 inference-engine 3 quantization 3 yolov8 2 cpp 2 ultralytics 2 acceleration 2 deep-neural-networks 2 openvino-toolkit 2 libtorch 2 edge-ai 2 llm 2 pruning 2 pytorch-mobile 2 cpu 2 cuda 2 fine-tuning 2 huggingface 2 transformers 2 huggingface-pipelines 2 tensorflow 2 computer-vision 2 nlp 2 huggingface-transformers 2 amd 2 token-pruning 1 multimodal-large-language-models 1 vision-language-model 1 centrality-measures 1 graph-theory 1 multilayer-perceptron 1 pagerank 1 structured-sparsity 1 weighted-pagerank 1 rocm 1 g2o 1 pandas 1 ray-tune 1 tensorflow-models 1 edge-machine-learning 1 inference-acceleration 1 mobile-deep-learning 1 product-quantization 1 context-window-scaling 1 cpu-inference 1 gpu-acceleration 1 hybrid-inference 1 llama-cpp 1 llm-benchmarking 1 llm-deployment 1 local-llm 1 model-management 1 model-quantization 1 ollama-optimization 1 wsl-ai-setup 1 llava 1 multi-modality 1 squeezenet 1 vgg 1 compiler 1 mlir 1 batch-normalization 1 keras 1 nvidia 1 template-library 1 convolutional-neural-network 1 convolutional-neural-networks 1 matrix-multiplication 1 mobile-inference 1 multithreading 1 neural-networks 1 simd 1 code-examples 1 demo 1 train 1 large-language-models 1 speculative-decoding 1 named-entity-recognition 1 ner 1 token-classification 1 multi-label-classification 1 text-classification 1 cross-encoder 1 jina 1 mlsys 1 torch-compile 1 amdgpu 1 caffe 1 docker-images 1 inceptionv4 1 mivisionx 1 mivisionx-inference-analyzer 1