GitHub: OpenSparseLLMs/LLaMA-MoE-v2
🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenSparseLLMs%2FLLaMA-MoE-v2 (fetch sketch after the metadata list below)
Stars: 78
Forks: 11
Open issues: 3
License: apache-2.0
Language: Python
Size: 2.21 MB
Dependencies parsed at: Pending
Created at: 5 months ago
Updated at: 20 days ago
Pushed at: 5 months ago
Last synced at: 19 days ago
Topics: attention, fine-tuning, instruction-tuning, llama, llama3, mixture-of-experts, moe, sft, sparsity
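
A minimal Python sketch for querying the JSON API endpoint listed above. The URL is taken from this page; the response field names printed here (e.g. `full_name`, `stargazers_count`) are assumptions about the ecosyste.ms schema, not confirmed by this listing.

```python
# Sketch: fetch this repository's metadata from the ecosyste.ms JSON API.
# Assumes the `requests` package is installed and the endpoint is reachable.
import requests

API_URL = (
    "http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/"
    "OpenSparseLLMs%2FLLaMA-MoE-v2"
)

resp = requests.get(API_URL, timeout=30)
resp.raise_for_status()
repo = resp.json()

# Field names below are assumed; fall back to "n/a" if the schema differs.
for key in ("full_name", "stargazers_count", "forks_count", "language", "pushed_at"):
    print(f"{key}: {repo.get(key, 'n/a')}")
```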