GitHub / NotYuSheng / Multimodal-Large-Language-Model
Locally hosted Multimodal Large Language Model (MLLM) app integrating Streamlit and Ollama for text and image processing tasks.
Stars: 4
Forks: 2
Open issues: 0
License: MIT
Language: Python
Size: 7.37 MB
Dependencies parsed at: Pending
Created at: 12 months ago
Updated at: 28 days ago
Pushed at: 28 days ago
Last synced at: 19 days ago
Topics: docker, large-language-models, llava, llm, multimodal, multimodal-large-language-models, ollama, pretrained, python, sphinx-doc, streamlit
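The topics above (streamlit, ollama, llava) suggest the usual wiring for this kind of app: a Streamlit front end that forwards a prompt and an optional image to a locally running Ollama server hosting a multimodal model such as LLaVA. The snippet below is a minimal sketch under that assumption, using the `ollama` Python client and a locally pulled `llava` model; it is illustrative only and is not the repository's actual application code.

```python
# Minimal sketch: Streamlit front end talking to a local Ollama server.
# Assumes the `ollama` Python client is installed and a multimodal model
# has been pulled beforehand (e.g. `ollama pull llava`).
import ollama
import streamlit as st

st.title("Multimodal chat (Ollama + LLaVA)")

prompt = st.text_input("Prompt", value="Describe this image.")
uploaded = st.file_uploader("Optional image", type=["png", "jpg", "jpeg"])

if st.button("Send") and prompt:
    message = {"role": "user", "content": prompt}

    if uploaded is not None:
        # The ollama client accepts raw image bytes via the `images` field.
        image_bytes = uploaded.read()
        st.image(image_bytes)
        message["images"] = [image_bytes]

    # Send the (possibly multimodal) message to the local Ollama server.
    response = ollama.chat(model="llava", messages=[message])
    st.write(response["message"]["content"])
```

Run with `streamlit run app.py` while the Ollama daemon is active; the same pattern extends to other Ollama-served multimodal models by changing the `model` argument.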