Ecosyste.ms: Repos

An open API service providing repository metadata for many open source software ecosystems.

GitHub / intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.

JSON API: https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel-analytics%2Fipex-llm
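The JSON endpoint above can be queried programmatically. A minimal sketch using only the Python standard library — note that the repository full name is one URL path segment, so the `/` between owner and repo must be percent-encoded (`%2F`), and the response field name used below (`stargazers_count`) is an assumption about the payload shape, not confirmed by this page:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://repos.ecosyste.ms/api/v1"

def repo_url(host: str, full_name: str) -> str:
    # Percent-encode the owner/repo pair into a single path segment,
    # e.g. "intel-analytics/ipex-llm" -> "intel-analytics%2Fipex-llm".
    return f"{API_BASE}/hosts/{host}/repositories/{urllib.parse.quote(full_name, safe='')}"

def fetch_repo(host: str, full_name: str) -> dict:
    # Fetch and decode the repository metadata JSON (requires network access).
    with urllib.request.urlopen(repo_url(host, full_name)) as resp:
        return json.load(resp)

# Example usage (network required; field names are assumptions):
# data = fetch_repo("GitHub", "intel-analytics/ipex-llm")
# print(data["stargazers_count"])
```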

Stars: 6,002
Forks: 1,202
Open Issues: 1,042

License: apache-2.0
Language: Python
Repo Size: 221 MB
Dependencies: 0

Created: over 7 years ago
Updated: about 2 hours ago
Last pushed: about 10 hours ago
Last synced: about 2 hours ago

Commit Stats

Commits: 18,964
Authors: 206
Mean commits per author: 92.06
Development Distribution Score: 0.9
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/intel-analytics/ipex-llm

Topics: gpu, llm, pytorch, transformers
