GitHub / RATHOD-SHUBHAM / SegmentAnything
SAM is a transformer-based deep learning model. When an image is given as input to the Segment Anything Model, it first passes through an image encoder, which produces a one-time embedding for the entire image. Prompts are handled by a separate prompt encoder: dense prompts (masks) are downsampled using 2D convolutional layers and combined with the image embedding, while sparse prompts (points and boxes) are represented as positional encodings. A lightweight mask decoder then takes the image and prompt embeddings and predicts the segmentation masks.
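Since the repository's topics point at Meta's segment_anything package, a minimal sketch of that pipeline (encode the image once, then prompt the mask decoder) might look like the following; the model variant, checkpoint path, image file, and prompt point below are placeholders, not values taken from this repository:

```python
# Illustrative sketch only: checkpoint path, image path, and prompt point are assumptions.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (vit_b here; vit_l / vit_h are the larger variants).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# The image encoder runs once per image and caches the embedding.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt; the mask decoder reuses the cached embedding.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),   # 1 = foreground, 0 = background
    multimask_output=True,        # return several candidate masks with quality scores
)
```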
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RATHOD-SHUBHAM%2FSegmentAnything
PURL: pkg:github/RATHOD-SHUBHAM/SegmentAnything
Stars: 1
Forks: 0
Open issues: 0
License: None
Language: Jupyter Notebook
Size: 30.2 MB
Dependencies parsed at: Pending
Created at: about 2 years ago
Updated at: almost 2 years ago
Pushed at: almost 2 years ago
Last synced at: 5 months ago
Commit Stats
Commits: 23
Authors: 2
Mean commits per author: 11.5
Development Distribution Score: 0.261
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/RATHOD-SHUBHAM/SegmentAnything
Topics: deep-learning, encoder-decoder, instance-segmentation, neural-network, object-detection, sam, segment-anything, segment-anything-model, segmentation, segmentation-models, semantic-segmentation, transformer