An open API service providing repository metadata for many open source software ecosystems.

GitHub / RahulSChand / gpu_poor

Calculates tokens/s and GPU memory requirements for any LLM. Supports llama.cpp/GGML/bnb/QLoRA quantization.
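As a rough illustration of the kind of calculation such a tool performs, the sketch below estimates weights-only GPU memory from parameter count and quantization bit width. This is a hypothetical back-of-the-envelope formula, not the repository's actual method; the function name and the fixed overhead term are assumptions.

```python
def estimate_inference_memory_gb(n_params_b: float,
                                 bits_per_weight: int,
                                 overhead_gb: float = 1.0) -> float:
    """Weights-only estimate: params (in billions) * bytes per weight,
    plus a flat overhead term for activations/KV cache (assumed)."""
    weight_bytes = n_params_b * 1e9 * (bits_per_weight / 8)
    return weight_bytes / 1024**3 + overhead_gb

# A 7B model at 4-bit quantization (e.g. a GGML q4 variant):
print(round(estimate_inference_memory_gb(7, 4), 1))  # → 4.3
```

Lower bit widths shrink the weight term linearly, which is why 4-bit quantization makes 7B-class models fit on consumer GPUs.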

JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RahulSChand%2Fgpu_poor
PURL: pkg:github/RahulSChand/gpu_poor

Stars: 1,335
Forks: 77
Open issues: 8

License: None
Language: JavaScript
Size: 1.56 MB
Dependencies parsed at: Pending

Created at: almost 2 years ago
Updated at: 7 days ago
Pushed at: 8 months ago
Last synced at: 6 days ago

Commit Stats

Commits: 59
Authors: 3
Mean commits per author: 19.67
Development Distribution Score: 0.271
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/RahulSChand/gpu_poor

Topics: ggml, gpu, huggingface, language-model, llama, llama2, llamacpp, llm, pytorch, quantization