GitHub / MuhammadAliS / CLIP
PyTorch implementation of OpenAI's CLIP model for image classification, visual search, and visual question answering (VQA).
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MuhammadAliS%2FCLIP
PURL: pkg:github/MuhammadAliS/CLIP
Stars: 0
Forks: 0
Open issues: 0
License: MIT
Language: Jupyter Notebook
Size: 15.7 MB
Dependencies parsed at: Pending
Created at: about 1 year ago
Updated at: 11 months ago
Pushed at: 11 months ago
Last synced at: 11 months ago
Topics: deep-neural-networks, huggingface, pytorch-implementation, transformers, visual-language-learning, visual-question-answering
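The tasks this repository targets (classification, visual search, VQA) all rest on CLIP's contrastive image-text matching: embeddings from the image and text encoders are L2-normalized, compared by cosine similarity, scaled by a learned temperature, and softmaxed over the candidate texts. A minimal sketch of that scoring step in plain PyTorch, using random tensors as hypothetical stand-ins for real encoder outputs (not code from this repo):

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for encoder outputs: in CLIP, an image encoder and a
# text encoder each map their input into a shared embedding space.
torch.manual_seed(0)
image_features = torch.randn(1, 512)   # one image embedding
text_features = torch.randn(3, 512)    # three candidate label/caption embeddings

# CLIP scores image-text pairs by cosine similarity of L2-normalized
# embeddings, scaled by a learned temperature (logit_scale), then softmaxed
# over the candidate texts to yield zero-shot classification probabilities.
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
logit_scale = torch.tensor(100.0)      # roughly exp() of CLIP's learned scale
logits = logit_scale * image_features @ text_features.t()
probs = logits.softmax(dim=-1)         # shape (1, 3), rows sum to 1
```

For visual search, the same normalized embeddings are reused directly: rank a gallery of image embeddings by their dot product with a query text embedding.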