GitHub / WebDevSachin / Setup-for-Qwen3-Coder-RunPod-Deployment
Complete deployment solution for Qwen3-Coder (30B/480B) on RunPod with Ollama + a LiteLLM proxy. Features a secure, OpenAI-compatible API endpoint with authentication, persistent-storage configuration, automated backups, and VS Code integration. Well suited to AI-powered development workflows.
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/WebDevSachin%2FSetup-for-Qwen3-Coder-RunPod-Deployment
PURL: pkg:github/WebDevSachin/Setup-for-Qwen3-Coder-RunPod-Deployment
Stars: 0
Forks: 0
Open issues: 0
License: apache-2.0
Language: Shell
Size: 17.6 KB
Dependencies parsed at: Pending
Created at: about 2 months ago
Updated at: about 2 months ago
Pushed at: about 2 months ago
Last synced at: about 2 months ago
Topics: 30b-model, 480b-model, ai-coding, ai-model, alibaba-llm, cline, code-generation, coding-llm, coding-llm-txt, gpu-deployment, litellm, ollama, openai-api, persistent-storage, qwen, qwen3-coder, runpod, secure-api, vscode
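Since the description above says the LiteLLM proxy exposes an authenticated, OpenAI-compatible endpoint, a client call would look roughly like the sketch below. The pod URL, API key, and model alias (`qwen3-coder:30b`) are all assumptions, not values from this repo — check the repo's LiteLLM config for the actual alias and port. The script composes the request and prints it; the actual `curl` call is left commented out so it can be enabled once real credentials are in place.

```shell
#!/bin/sh
# Hedged sketch: assumes LiteLLM serves the standard OpenAI
# /v1/chat/completions route and that the Ollama model is registered
# under the alias "qwen3-coder:30b" (both are assumptions).
API_BASE="${API_BASE:-https://your-pod-id-8000.proxy.runpod.net}"  # hypothetical RunPod proxy URL
API_KEY="${API_KEY:-sk-change-me}"                                 # hypothetical LiteLLM key

# Compose an OpenAI-style chat-completion request body.
BODY='{"model":"qwen3-coder:30b","messages":[{"role":"user","content":"Write a quicksort in Python."}]}'

# Show what would be sent.
echo "POST ${API_BASE}/v1/chat/completions"
echo "$BODY"

# Uncomment to actually send the request to the running proxy:
# curl -s "${API_BASE}/v1/chat/completions" \
#   -H "Authorization: Bearer ${API_KEY}" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

Because the proxy speaks the OpenAI wire format, the same endpoint and key can be dropped into VS Code extensions such as Cline (listed in the topics) as a custom OpenAI-compatible provider.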