Topic: "flash-linear-attention"
fla-org/flame
🔥 A minimal training framework for scaling FLA models
Language: Python - Size: 226 KB - Last synced at: 10 days ago - Pushed at: 10 days ago - Stars: 166 - Forks: 22

fla-org/fla-zoo
Flash-Linear-Attention models beyond language
Language: Python - Size: 13.3 MB - Last synced at: 3 days ago - Pushed at: 4 days ago - Stars: 16 - Forks: 1

kazuki-irie/hybrid-memory
Official repository for the paper "Blending Complementary Memory Systems in Hybrid Quadratic-Linear Transformers"
Language: Python - Size: 610 KB - Last synced at: 19 days ago - Pushed at: 19 days ago - Stars: 3 - Forks: 0

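The repositories above all build on the flash-linear-attention idea: replacing softmax attention with a kernelized form whose running state can be updated recurrently, giving O(n) cost in sequence length. The sketch below is not code from any of the listed repos (which use fused Triton kernels); it is a minimal pure-PyTorch illustration of the underlying recurrence, with the `elu(x) + 1` feature map and tensor layout chosen as assumptions for clarity.

```python
# Illustrative linear attention, assuming out_t = phi(q_t) S_t / (phi(q_t) z_t)
# with S_t = sum_{i<=t} phi(k_i)^T v_i and z_t = sum_{i<=t} phi(k_i).
import torch

def linear_attention(q, k, v):
    """q, k, v: (batch, seq_len, heads, dim). Causal linear attention, O(n) in seq_len."""
    phi = lambda x: torch.nn.functional.elu(x) + 1  # positive feature map (assumed here)
    q, k = phi(q), phi(k)
    b, n, h, d = q.shape
    s = torch.zeros(b, h, d, d, dtype=q.dtype, device=q.device)  # running K^T V state
    z = torch.zeros(b, h, d, dtype=q.dtype, device=q.device)     # running normalizer
    out = torch.empty_like(v)
    for t in range(n):
        qt, kt, vt = q[:, t], k[:, t], v[:, t]                   # each (b, h, d)
        s = s + kt.unsqueeze(-1) * vt.unsqueeze(-2)              # rank-1 state update
        z = z + kt
        num = torch.einsum('bhd,bhde->bhe', qt, s)
        den = torch.einsum('bhd,bhd->bh', qt, z).clamp(min=1e-6)
        out[:, t] = num / den.unsqueeze(-1)
    return out

if __name__ == "__main__":
    x = torch.randn(2, 16, 4, 32)
    print(linear_attention(x, x, x).shape)  # torch.Size([2, 16, 4, 32])
```

Production implementations such as fla-org's kernels chunk this recurrence and fuse it on the GPU rather than looping over time steps in Python; the loop above is only meant to show what state is being carried.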