GitHub / inferless / llama-2-70b-chat-gptq
GPTQ-quantized model fine-tuned for dialogue applications. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/inferless%2Fllama-2-70b-chat-gptq
PURL: pkg:github/inferless/llama-2-70b-chat-gptq
Stars: 0
Forks: 1
Open issues: 0
License: None
Language: Python
Size: 15.6 KB
Dependencies parsed at: Pending
Created at: over 1 year ago
Updated at: 6 months ago
Pushed at: 6 months ago
Last synced at: 6 months ago
Topics: generate-text
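
The "vLLM" and "GPTQ" collection tags indicate the repository is meant to serve a GPTQ-quantized Llama 2 70B Chat checkpoint with vLLM on an A100. Below is a minimal sketch of loading and querying such a model with vLLM; the Hugging Face model id, sampling settings, and prompt are illustrative assumptions and are not taken from this repository.

```python
# Minimal sketch: serving a GPTQ-quantized Llama 2 70B Chat model with vLLM.
# The model id below is an assumption for illustration; this repository does
# not state which checkpoint it deploys.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-70B-chat-GPTQ",  # assumed GPTQ checkpoint
    quantization="gptq",                     # use vLLM's GPTQ kernels
    dtype="float16",
)

sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

# Llama 2 Chat expects [INST] ... [/INST] formatted prompts.
prompts = ["[INST] Explain what GPTQ quantization does. [/INST]"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```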