GitHub topics: visual-language-action-model
NarutoOG/2025-1
🛠️ Build and explore Compiladores 1 (Compilers 1) course materials for Software Engineering, including tasks, discussions, and course resources for the 2025 semester.
Size: 1.29 MB - Last synced: 7 days ago - Pushed: 7 days ago - Stars: 1 - Forks: 0
SpatialVLA/SpatialVLA
🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025.
Language: Python - Size: 4.97 MB - Last synced: 7 months ago - Pushed: 7 months ago - Stars: 245 - Forks: 11