GitHub / FMInference / FlexLLMGen
Running large language models on a single GPU for throughput-oriented scenarios.
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FMInference%2FFlexLLMGen (example request after this listing)
Stars: 9,316
Forks: 568
Open issues: 58
License: apache-2.0
Language: Python
Size: 37.1 MB
Dependencies parsed at: Pending
Created at: over 2 years ago
Updated at: 20 days ago
Pushed at: 7 months ago
Last synced at: 19 days ago
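The JSON API endpoint listed above can be queried directly. Below is a minimal Python sketch (using the requests package) that fetches this repository's metadata document; the endpoint URL is taken verbatim from this page, while the response field names printed at the end (full_name, stargazers_count, forks_count, language) are assumptions about the schema, not something this page confirms.

```python
# Minimal sketch: fetch this repository's metadata from the ecosyste.ms
# repositories API (URL copied from the "JSON API" field above).
import requests

API_URL = (
    "http://repos.ecosyste.ms/api/v1/hosts/GitHub/"
    "repositories/FMInference%2FFlexLLMGen"
)

def fetch_repo_metadata(url: str = API_URL) -> dict:
    """Return the repository metadata document as a Python dict."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    repo = fetch_repo_metadata()
    # The keys below are hypothetical field names, not confirmed by this page.
    for key in ("full_name", "stargazers_count", "forks_count", "language"):
        print(key, "=", repo.get(key))
```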
Commit Stats
Commits: 94
Authors: 17
Mean commits per author: 5.53
Development Distribution Score: 0.479
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/FMInference/FlexLLMGen (example request below)
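The commit-stats link above points at the HTML view. The sketch below assumes a corresponding JSON endpoint under /api/v1/ based on the ecosyste.ms URL pattern; both that endpoint and the field names read from the response are assumptions and may need adjusting.

```python
# Sketch: fetch commit statistics for this repository from commits.ecosyste.ms.
# The /api/v1/ endpoint is an assumed counterpart of the HTML page linked above.
import requests

COMMITS_API = (
    "https://commits.ecosyste.ms/api/v1/hosts/GitHub/"
    "repositories/FMInference/FlexLLMGen"
)

def fetch_commit_stats(url: str = COMMITS_API) -> dict:
    """Return the commit-stats document as a dict (schema not verified here)."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    stats = fetch_commit_stats()
    # "total_commits" and "total_committers" are hypothetical field names.
    print(stats.get("total_commits"), stats.get("total_committers"))
```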
Topics: deep-learning, gpt-3, high-throughput, large-language-models, machine-learning, offloading, opt