GitHub / zer0int / CLIPInversion
What do we learn from inverting CLIP models? And what does a CLIP 'see' in an image?
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zer0int%2FCLIPInversion (example request sketched below)
Fork of hamidkazemi22/CLIPInversion
Stars: 2
Forks: 0
Open issues: 0
License: None
Language: Python
Size: 3.85 MB
Dependencies parsed at: Pending
Created at: 9 months ago
Updated at: 6 months ago
Pushed at: 6 months ago
Last synced at: 6 months ago
Topics: ai, clip, gradient-ascent, image-to-image, inversion, model, text-encoder, text-to-image, transformer, vision, xai
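The topics above (clip, gradient-ascent, inversion) describe inverting CLIP's image encoder by gradient ascent. As a rough, hypothetical illustration only, and not this repository's actual code, a minimal sketch with the openai `clip` package could look like the following; the ViT-B/32 variant, the prompt, the learning rate, and the step count are all assumptions.

```python
# Hypothetical sketch of CLIP inversion by gradient ascent (not this repository's code):
# optimize raw pixels so their CLIP image embedding matches a target text embedding.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()

# Target text embedding (assumed prompt).
tokens = clip.tokenize(["a photo of a golden retriever"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Start from noise at CLIP's 224x224 input resolution and ascend on cosine similarity.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    img_feat = model.encode_image(image.clamp(0, 1))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()  # negative cosine similarity
    loss.backward()
    optimizer.step()

# `image` now roughly approximates what CLIP "sees" for the prompt; a real inversion
# pipeline would add augmentations and regularizers to get recognizable results.
```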
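The ecosyste.ms JSON API linked above serves the same repository metadata as machine-readable JSON. A minimal sketch of a request, assuming the standard `requests` package; the printed field names are guesses about the response shape, not confirmed keys:

```python
# Minimal sketch: fetch this repository's metadata from the ecosyste.ms JSON API.
import requests

URL = "http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zer0int%2FCLIPInversion"

response = requests.get(URL, timeout=30)
response.raise_for_status()
data = response.json()

# Field names below are assumptions about the payload, not confirmed keys.
print(data.get("full_name"))
print(data.get("stargazers_count"))
print(data.get("topics"))
```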