GitHub topics: multi-modal-search
ThuyHaLE/FrameFinderLE
FrameFinderLE is an advanced image and video frame retrieval system that enhances CLIP's image-text pairing with hashtag refinement and user feedback, offering an intuitive search experience.
Language: Python - Size: 19 MB - Last synced at: 21 days ago - Pushed at: 21 days ago - Stars: 1 - Forks: 0

haofanwang/natural-language-joint-query-search
Search photos on Unsplash based on OpenAI's CLIP model, supporting search with joint image+text queries and attention visualization.
Language: Jupyter Notebook - Size: 12.9 MB - Last synced at: 2 months ago - Pushed at: over 3 years ago - Stars: 218 - Forks: 20
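The joint image+text query idea above can be sketched as combining normalized embeddings into a single query vector and ranking by cosine similarity. This is a minimal illustration, not the repository's actual code: random vectors stand in for real CLIP embeddings, and the names `joint_query` and `alpha` are illustrative assumptions.

```python
import numpy as np

def normalize(v):
    # Scale a vector to unit length so dot products equal cosine similarity.
    return v / np.linalg.norm(v)

def joint_query(text_emb, image_emb, alpha=0.5):
    # Blend normalized text and image embeddings into one query vector;
    # alpha weights the text side (0.5 = equal weight).
    return normalize(alpha * normalize(text_emb) + (1 - alpha) * normalize(image_emb))

def search(query, photo_embs, k=3):
    # Brute-force cosine-similarity ranking over the photo collection.
    sims = photo_embs @ query
    return np.argsort(-sims)[:k]

# Toy data: 512-dim random vectors stand in for real CLIP embeddings.
rng = np.random.default_rng(0)
photos = np.stack([normalize(v) for v in rng.normal(size=(100, 512))])
q = joint_query(rng.normal(size=512), rng.normal(size=512))
top = search(q, photos)
```

In a real pipeline the embeddings would come from a CLIP text encoder and image encoder; because both live in the same embedding space, a weighted sum of the two is a reasonable way to express "like this image, but matching this text".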

emmyoh/zebra
A vector database for querying meaningfully similar data.
Language: Rust - Size: 1.48 MB - Last synced at: about 2 months ago - Pushed at: 3 months ago - Stars: 16 - Forks: 0

Zabuzard/Cobweb
Cobweb is a multi-modal journey planner offering a server-based REST API and a lightweight frontend.
Language: Java - Size: 29.4 MB - Last synced at: about 1 month ago - Pushed at: almost 4 years ago - Stars: 14 - Forks: 3

cherrry-ai/cherrry-js
Cherrry JavaScript SDK
Language: JavaScript - Size: 42 KB - Last synced at: about 1 month ago - Pushed at: over 2 years ago - Stars: 4 - Forks: 0

sagarverma/MultiModalSearch
CSE508 (Information Retrieval) course project on multi-modal search using deep learning.
Language: Python - Size: 105 MB - Last synced at: almost 2 years ago - Pushed at: about 9 years ago - Stars: 1 - Forks: 0
