GitHub / shikiw / Modality-Integration-Rate
[ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate".
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/shikiw%2FModality-Integration-Rate (see the query sketch after this metadata list)
PURL: pkg:github/shikiw/Modality-Integration-Rate
Stars: 100
Forks: 2
Open issues: 0
License: MIT
Language: Python
Size: 17.7 MB
Dependencies parsed at: Pending
Created at: 10 months ago
Updated at: 20 days ago
Pushed at: 20 days ago
Last synced at: 20 days ago
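The JSON API endpoint and PURL above can be consumed programmatically. Below is a minimal sketch, assuming the `requests` and `packageurl-python` libraries are installed; the response field names ("full_name", "stargazers_count", "language", "license") are assumptions about the ecosyste.ms schema, not confirmed by this page.

```python
import requests
from packageurl import PackageURL

# Endpoint copied verbatim from the listing above.
API_URL = (
    "http://repos.ecosyste.ms/api/v1/hosts/GitHub/"
    "repositories/shikiw%2FModality-Integration-Rate"
)

resp = requests.get(API_URL, timeout=10)
resp.raise_for_status()
repo = resp.json()

# Field names here are assumptions; .get() avoids a KeyError if the schema differs.
for field in ("full_name", "stargazers_count", "language", "license"):
    print(field, "=", repo.get(field))

# The PURL identifies the same repository in package-URL form.
purl = PackageURL.from_string("pkg:github/shikiw/Modality-Integration-Rate")
print(purl.type, purl.namespace, purl.name)
# -> github shikiw Modality-Integration-Rate
```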
Commit Stats
Commits: 14
Authors: 2
Mean commits per author: 7.0
Development Distribution Score: 0.5 (see the worked example after these stats)
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/shikiw/Modality-Integration-Rate
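The score above is consistent with computing the Development Distribution Score as 1 minus the top contributor's share of commits (an assumption about ecosyste.ms' definition). A quick check, assuming the 14 commits are split 7/7 between the two authors; the exact split is inferred from the mean of 7.0, not stated on this page:

```python
# Assumed per-author commit counts, consistent with 14 commits, 2 authors,
# and a mean of 7.0 commits per author.
commits_per_author = [7, 7]

total = sum(commits_per_author)
dds = 1 - max(commits_per_author) / total
print(dds)  # 0.5, matching the reported score
```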
Topics: chatbot, gpt-4o, large-multimodal-models, llama, llava, multimodal, vision-language-learning, vision-language-model