GitHub / Coobiw / MPP-LLaVA
Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports video, image, and multi-image inputs for SFT and multi-turn conversations. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on RTX 3090/4090 24GB GPUs (see the pipeline-parallel sketch below the topics list).
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Coobiw%2FMPP-LLaVA
Stars: 420
Forks: 23
Open issues: 8
License: None
Language: Jupyter Notebook
Size: 73.1 MB
Dependencies parsed at: Pending
Created at: over 1 year ago
Updated at: about 1 month ago
Pushed at: about 1 month ago
Last synced at: about 1 month ago
Topics: deepspeed, fine-tuning, mllm, model-parallel, multimodal-large-language-models, pipeline-parallelism, pretraining, qwen, video-language-model, video-large-language-models
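The topics name DeepSpeed pipeline parallelism, which is how a project like this fits an 8B/14B model onto 24 GB cards: the transformer stack is split into stages placed on different GPUs, and micro-batches are streamed through the stages with gradient accumulation. Below is a minimal, hedged sketch of that pattern using DeepSpeed's public `PipelineModule` API; the layer definition, stage count, and config values are illustrative assumptions and are not taken from MPP-LLaVA's code.

```python
# Minimal DeepSpeed pipeline-parallel training sketch (illustrative only).
# Launch with the DeepSpeed launcher, e.g.: deepspeed --num_gpus 2 train_pipeline.py
import torch
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

class Block(nn.Module):
    """Toy stand-in for one transformer block; a real MLLM would interleave
    a vision encoder/projector with Qwen decoder layers."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(x)

deepspeed.init_distributed()

# Split the layer list into 2 pipeline stages, one per GPU.
layers = [Block() for _ in range(8)]
model = PipelineModule(layers=layers, num_stages=2, loss_fn=nn.MSELoss())

# Minimal config; the real project would add fp16/bf16, ZeRO, etc. on top.
ds_config = {
    "train_batch_size": 16,
    "train_micro_batch_size_per_gpu": 2,  # gradient accumulation keeps the pipeline busy
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config=ds_config,
)

def data_iter():
    """Yields (input, label) pairs; trivial identity target for illustration."""
    while True:
        x = torch.randn(2, 1024)
        yield x, x

# The pipeline engine pulls micro-batches from the iterator and runs
# forward/backward/step across the stages internally.
loss = engine.train_batch(data_iter=data_iter())
```

The key idea is that each GPU only holds its own stage's parameters, optimizer state, and activations, which is what makes a 14B-scale model trainable on consumer 24 GB cards.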