Ecosyste.ms: Repos

An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: self-attention

cmhungsteve/Awesome-Transformer-Attention

A comprehensive paper list of Vision Transformer/Attention, including papers, code, and related websites

Size: 4.96 MB - Last synced: 6 days ago - Pushed: 7 days ago - Stars: 4,305 - Forks: 474

sushantkumar23/nano-gpt

A simple character-level Transformer

Language: Jupyter Notebook - Size: 467 KB - Last synced: 1 day ago - Pushed: 1 day ago - Stars: 0 - Forks: 0
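
Not from any repository above: a minimal NumPy sketch of the causal (decoder-style) self-attention that character-level Transformers such as nano-gpt are built on. The function name and the identity Q/K/V projections are illustrative simplifications; real models add learned projections and multiple heads.

```python
import numpy as np

def causal_self_attention(X):
    """Decoder-style self-attention: position t may only attend to positions <= t.

    X: (T, d) sequence of token embeddings. For brevity, queries, keys, and
    values are all X itself (identity projections).
    """
    T, d = X.shape
    scores = X @ X.T / np.sqrt(d)                     # (T, T) similarity scores
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-future positions
    scores[mask] = -np.inf                            # forbid attending ahead
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # row-wise softmax
    return w @ X                                      # (T, d) mixed values
```

Because of the mask, the first position can only attend to itself, so its output row equals its input row.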

orca-ai-research/orca-engine

An engine for the Orca AI architecture, written in Python.

Language: Python - Size: 39.1 KB - Last synced: 7 days ago - Pushed: 7 days ago - Stars: 1 - Forks: 0

PetarV-/GAT

Graph Attention Networks (https://arxiv.org/abs/1710.10903)

Language: Python - Size: 4.6 MB - Last synced: 8 days ago - Pushed: about 2 years ago - Stars: 3,089 - Forks: 637
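
Not taken from the repo: a minimal single-head NumPy sketch of the GAT attention rule from the linked paper, e_ij = LeakyReLU(a^T [Wh_i ‖ Wh_j]) softmaxed over each node's neighbors. The function name and array shapes are illustrative assumptions.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph attention layer (sketch of Veličković et al., 2017).

    H: (N, F) node features; A: (N, N) adjacency with self-loops (1 = edge);
    W: (F, Fp) weight matrix; a: (2*Fp,) attention vector; slope: LeakyReLU slope.
    """
    Wh = H @ W                                   # (N, Fp) projected features
    Fp = Wh.shape[1]
    # e[i, j] = a^T [Wh_i || Wh_j], computed via broadcasting
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask non-neighbors
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)        # softmax over neighbors
    return att @ Wh                              # (N, Fp) aggregated features
```

A node whose only neighbor is itself (via its self-loop) simply receives its own projected features.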

Diego999/pyGAT

PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)

Language: Python - Size: 207 KB - Last synced: 8 days ago - Pushed: 11 months ago - Stars: 2,804 - Forks: 686

zhouhaoyi/Informer2020

The GitHub repository for the paper "Informer", accepted at AAAI 2021.

Language: Python - Size: 6.33 MB - Last synced: 8 days ago - Pushed: 3 months ago - Stars: 4,978 - Forks: 1,050

cx-olquinjica/Deep-Learning-Personal-Notebooks

This collection of notebooks is based on the Dive into Deep Learning book. All of the notes are written in PyTorch and the d2l/torch library

Language: Jupyter Notebook - Size: 3.56 MB - Last synced: 8 days ago - Pushed: 8 days ago - Stars: 3 - Forks: 1

WenjieDu/SAITS

The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast, state-of-the-art (SOTA) deep-learning model for efficient time-series imputation (imputing multivariate incomplete time series containing NaN missing values with machine learning). https://arxiv.org/abs/2202.08516

Language: Python - Size: 587 KB - Last synced: 8 days ago - Pushed: about 1 month ago - Stars: 271 - Forks: 47

miniHuiHui/awesome-high-order-neural-network

Size: 36.1 KB - Last synced: 7 days ago - Pushed: 21 days ago - Stars: 27 - Forks: 3

wangxiao5791509/MultiModal_BigModels_Survey

[MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models

Size: 12.3 MB - Last synced: 9 days ago - Pushed: 9 days ago - Stars: 253 - Forks: 16

deshwalmahesh/ML-Models-from-Scratch

Repo for ML models built from scratch, such as Self-Attention, Linear + Logistic Regression, PCA, LDA, CNN, LSTM, and Neural Networks, using NumPy only

Language: Jupyter Notebook - Size: 38.4 MB - Last synced: 10 days ago - Pushed: 10 days ago - Stars: 8 - Forks: 1

Syeda-Farhat/awesome-Transformers-For-Segmentation

Semantic segmentation is an important task in computer vision, and its applications have grown in popularity over the last decade. We grouped the publications that used various forms of segmentation in this repository. In particular, every paper is built on a transformer.

Size: 198 KB - Last synced: 6 days ago - Pushed: 24 days ago - Stars: 16 - Forks: 0

datawhalechina/leedl-tutorial

Hung-yi Lee's Deep Learning Tutorial (recommended by Professor Hung-yi Lee 👍). Aiming for 10,000 stars; thanks for your Stars :star:! PDF download: https://github.com/datawhalechina/leedl-tutorial/releases

Language: Jupyter Notebook - Size: 283 MB - Last synced: 10 days ago - Pushed: 11 days ago - Stars: 9,919 - Forks: 2,555

daiquocnguyen/Graph-Transformer

Universal Graph Transformer Self-Attention Networks (TheWebConf WWW 2022) (PyTorch and TensorFlow)

Language: Python - Size: 109 MB - Last synced: 8 days ago - Pushed: almost 2 years ago - Stars: 622 - Forks: 77

veb-101/keras-vision

Porting vision models to Keras 3 for easy accessibility. Contains MobileViT v1.

Language: Jupyter Notebook - Size: 1.8 MB - Last synced: 12 days ago - Pushed: 12 days ago - Stars: 4 - Forks: 0

Separius/awesome-fast-attention 📦

A list of efficient attention modules

Language: Python - Size: 156 KB - Last synced: 6 days ago - Pushed: almost 3 years ago - Stars: 978 - Forks: 108

GuanRunwei/Awesome-Vision-Transformer-Collection

Variants of Vision Transformer and its downstream tasks

Size: 59.6 KB - Last synced: 9 days ago - Pushed: almost 2 years ago - Stars: 187 - Forks: 18

DirtyHarryLYL/Transformer-in-Vision

Recent Transformer-based CV and related works.

Size: 1.84 MB - Last synced: 13 days ago - Pushed: 9 months ago - Stars: 1,295 - Forks: 141

NVlabs/FasterViT

[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention

Language: Python - Size: 1.28 MB - Last synced: 16 days ago - Pushed: about 1 month ago - Stars: 682 - Forks: 53

fudan-zvg/SOFT

[NeurIPS 2021 Spotlight] & [IJCV 2024] SOFT: Softmax-free Transformer with Linear Complexity

Language: Python - Size: 5.06 MB - Last synced: 10 days ago - Pushed: 2 months ago - Stars: 294 - Forks: 23

SQY2021/Effinformer_IEEE-TIM

Effinformer: A Deep-Learning-Based Data-Driven Modeling of DC–DC Bidirectional Converters (published in IEEE Transactions on Instrumentation and Measurement, IEEE TIM)

Language: Jupyter Notebook - Size: 1.32 MB - Last synced: 20 days ago - Pushed: 20 days ago - Stars: 1 - Forks: 0

VSainteuf/utae-paps

PyTorch implementation of U-TAE and PaPs for satellite image time series panoptic segmentation.

Language: Jupyter Notebook - Size: 3.03 MB - Last synced: 14 days ago - Pushed: over 1 year ago - Stars: 123 - Forks: 52

naver-ai/egtr

[CVPR 2024 Oral] EGTR: Extracting Graph from Transformer for Scene Graph Generation

Size: 104 KB - Last synced: 15 days ago - Pushed: about 2 months ago - Stars: 15 - Forks: 0

The-AI-Summer/self-attention-cv

Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.

Language: Python - Size: 291 KB - Last synced: 19 days ago - Pushed: over 2 years ago - Stars: 1,143 - Forks: 148

github/CodeSearchNet 📦

Datasets, tools, and benchmarks for representation learning of code.

Language: Jupyter Notebook - Size: 28.6 MB - Last synced: 21 days ago - Pushed: over 2 years ago - Stars: 2,117 - Forks: 377

kevalmorabia97/Object-and-Semantic-Part-Detection-pyTorch

Joint detection of Object and its Semantic parts using Attention-based Feature Fusion on PASCAL Parts 2010 dataset

Language: Python - Size: 8.27 MB - Last synced: 24 days ago - Pushed: 25 days ago - Stars: 24 - Forks: 4

microsoft/DeBERTa

The implementation of DeBERTa

Language: Python - Size: 237 KB - Last synced: 21 days ago - Pushed: 8 months ago - Stars: 1,852 - Forks: 209

goutamyg/SMAT

[WACV 2024] Separable Self and Mixed Attention Transformers for Efficient Object Tracking

Language: Python - Size: 1.81 MB - Last synced: 26 days ago - Pushed: 26 days ago - Stars: 21 - Forks: 4

emadeldeen24/ECGTransForm

[Biomedical Signal Processing and Control] ECGTransForm: Empowering adaptive ECG arrhythmia classification framework with bidirectional transformer

Language: Python - Size: 1.11 MB - Last synced: 28 days ago - Pushed: 28 days ago - Stars: 10 - Forks: 1

zabaras/transformer-physx

Transformers for modeling physical systems

Language: Python - Size: 31.7 MB - Last synced: 29 days ago - Pushed: 10 months ago - Stars: 114 - Forks: 29

lucidrains/global-self-attention-network

A PyTorch implementation of Global Self-Attention Network, a fully attentional backbone for vision tasks

Language: Python - Size: 95.7 KB - Last synced: 26 days ago - Pushed: over 3 years ago - Stars: 90 - Forks: 7

PrashantRanjan09/Structured-Self-Attentive-Sentence-Embedding

Implementation of the paper "A Structured Self-Attentive Sentence Embedding" (ICLR 2017)

Language: Python - Size: 279 KB - Last synced: about 1 month ago - Pushed: almost 6 years ago - Stars: 1 - Forks: 2

kaituoxu/Speech-Transformer

A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.

Language: Python - Size: 678 KB - Last synced: about 1 month ago - Pushed: about 1 year ago - Stars: 762 - Forks: 193

lifanchen-simm/transformerCPI

TransformerCPI: Improving compound–protein interaction prediction by sequence-based deep learning with self-attention mechanism and label reversal experiments(BIOINFORMATICS 2020) https://doi.org/10.1093/bioinformatics/btaa524

Language: Python - Size: 47.2 MB - Last synced: 21 days ago - Pushed: almost 2 years ago - Stars: 128 - Forks: 40

akanimax/some-randon-gan-1

MSG-GAN with self-attention. For MSG-GAN, head to https://github.com/akanimax/MSG-GAN

Language: Python - Size: 58.4 MB - Last synced: about 1 month ago - Pushed: over 5 years ago - Stars: 2 - Forks: 2

Cuongvn08/An-Efficient-Framework-for-Vietnamese-Sentiment-Analysis

This repository contains all source code related to the paper "An Efficient Framework for Vietnamese Sentiment Analysis", SOMET 2020.

Language: Python - Size: 44 MB - Last synced: about 1 month ago - Pushed: almost 4 years ago - Stars: 2 - Forks: 2

anthony-wang/CrabNet

Predict materials properties using only the composition information!

Language: Python - Size: 429 MB - Last synced: 17 days ago - Pushed: about 1 year ago - Stars: 82 - Forks: 23

Ellon-M/AmbientGAN

A torch-trained network that leverages self-attention and LSTMs to generate piano notes from MIDI files.

Language: NewLisp - Size: 263 KB - Last synced: about 1 month ago - Pushed: about 2 years ago - Stars: 4 - Forks: 0

XiaShan1227/Graphormer

Do Transformers Really Perform Bad for Graph Representation? [NIPS-2021]

Language: Python - Size: 2.96 MB - Last synced: about 1 month ago - Pushed: about 1 month ago - Stars: 0 - Forks: 0

vmarinowski/infini-attention

An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention"

Language: Python - Size: 34.2 KB - Last synced: about 1 month ago - Pushed: about 1 month ago - Stars: 2 - Forks: 0

piyop/tf_san

A TensorFlow 2.x implementation of the SAN model

Language: Python - Size: 7.81 KB - Last synced: about 1 month ago - Pushed: over 3 years ago - Stars: 0 - Forks: 0

ustcml/LISA

PyTorch implementation of LISA (Linear-Time Self-Attention with Codeword Histogram for Efficient Recommendation, WWW 2021)

Language: Python - Size: 9.2 MB - Last synced: about 1 month ago - Pushed: over 2 years ago - Stars: 0 - Forks: 0

xxxnell/how-do-vits-work

(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"

Language: Python - Size: 18.3 MB - Last synced: about 1 month ago - Pushed: almost 2 years ago - Stars: 786 - Forks: 76

chris-caballero/Ticket-Classification-App

In this project I address the problem of support ticket classification by developing an encoder-only transformer using PyTorch.

Language: Python - Size: 161 MB - Last synced: about 1 month ago - Pushed: 5 months ago - Stars: 0 - Forks: 0

Tixierae/deep_learning_NLP

Keras, PyTorch, and NumPy Implementations of Deep Learning Architectures for NLP

Language: Jupyter Notebook - Size: 105 MB - Last synced: 25 days ago - Pushed: 25 days ago - Stars: 435 - Forks: 106

VSainteuf/pytorch-psetae

PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"

Language: Python - Size: 1.98 MB - Last synced: about 1 month ago - Pushed: over 2 years ago - Stars: 164 - Forks: 35

sarthak-chakraborty/MinSSL

Minimally Supervised Semi-Supervised Learning

Language: Python - Size: 160 MB - Last synced: about 1 month ago - Pushed: over 3 years ago - Stars: 0 - Forks: 0

NVlabs/FAN

Official PyTorch implementation of Fully Attentional Networks

Language: Python - Size: 8.6 MB - Last synced: about 1 month ago - Pushed: about 1 year ago - Stars: 458 - Forks: 28

shamim-hussain/egt

Edge-Augmented Graph Transformer

Language: Python - Size: 150 KB - Last synced: about 1 month ago - Pushed: almost 2 years ago - Stars: 41 - Forks: 7

shamim-hussain/egt_pytorch

Edge-Augmented Graph Transformer

Language: Python - Size: 79.1 KB - Last synced: about 1 month ago - Pushed: 3 months ago - Stars: 67 - Forks: 9

alohays/awesome-visual-representation-learning-with-transformers

Awesome Transformers (self-attention) in Computer Vision

Size: 73.2 KB - Last synced: 6 days ago - Pushed: almost 3 years ago - Stars: 263 - Forks: 36

esceptico/perceiver-io

Unofficial implementation of Perceiver IO

Language: Python - Size: 16.6 KB - Last synced: 20 days ago - Pushed: almost 2 years ago - Stars: 117 - Forks: 6

haifangong/CMSA-MTPT-4-MedicalVQA

[ICMR'21, Best Poster Paper Award] Medical Visual Question Answering with Multi-task Pre-training and Cross-modal Self-attention

Language: Python - Size: 534 KB - Last synced: 15 days ago - Pushed: over 1 year ago - Stars: 32 - Forks: 2

sfu-mial/SLiMe

1-shot image segmentation using Stable Diffusion

Language: Python - Size: 23.5 MB - Last synced: about 2 months ago - Pushed: 8 months ago - Stars: 1 - Forks: 0

NVlabs/GCVit

[ICML 2023] Official PyTorch implementation of Global Context Vision Transformers

Language: Python - Size: 858 KB - Last synced: about 1 month ago - Pushed: 5 months ago - Stars: 415 - Forks: 50

tensorops/TransformerX

Flexible Python library providing building blocks (layers) for reproducible Transformers research (Tensorflow ✅, Pytorch 🔜, and Jax 🔜)

Language: Python - Size: 508 KB - Last synced: 29 days ago - Pushed: 4 months ago - Stars: 52 - Forks: 9

binli123/dsmil-wsi

DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image

Language: Python - Size: 48.1 MB - Last synced: about 2 months ago - Pushed: 2 months ago - Stars: 302 - Forks: 82

flrngel/Self-Attentive-tensorflow 📦

TensorFlow implementation of "A Structured Self-Attentive Sentence Embedding"

Language: Python - Size: 1.4 MB - Last synced: about 1 month ago - Pushed: over 2 years ago - Stars: 192 - Forks: 39

speedinghzl/CCNet

CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019).

Language: Python - Size: 3.88 MB - Last synced: 2 months ago - Pushed: about 3 years ago - Stars: 1,385 - Forks: 276

FutureComputing4AI/Hrrformer

Hrrformer: A Neuro-symbolic Self-attention Model (ICML23)

Language: Python - Size: 126 KB - Last synced: 2 months ago - Pushed: 12 months ago - Stars: 36 - Forks: 2

Dorian25/colorization-transformer-experimental

Implementation of the Colorization Transformer paper (ICLR 2021) - experimental version

Language: Python - Size: 424 KB - Last synced: 11 days ago - Pushed: over 3 years ago - Stars: 17 - Forks: 3

GvHemanth/Transformers-based-Text-Summarization

A Transformer model for text summarization, trained on news articles to produce concise summaries.

Language: Jupyter Notebook - Size: 12.3 MB - Last synced: 2 months ago - Pushed: 2 months ago - Stars: 1 - Forks: 0

freebeing1/Uniwin

PyTorch implementation of Uniwin ("Image Super-resolution with Unified Window Attention").

Language: Python - Size: 35.5 MB - Last synced: 2 months ago - Pushed: 2 months ago - Stars: 9 - Forks: 0

L0SG/relational-rnn-pytorch

An implementation of DeepMind's Relational Recurrent Neural Networks (NeurIPS 2018) in PyTorch.

Language: Python - Size: 4.49 MB - Last synced: about 1 month ago - Pushed: over 5 years ago - Stars: 246 - Forks: 35

prakashpandey9/Text-Classification-Pytorch

Text classification using deep learning models in Pytorch

Language: Python - Size: 31.3 KB - Last synced: 2 months ago - Pushed: over 5 years ago - Stars: 801 - Forks: 237

zxuu/Self-Attention

A complete implementation of the Transformer. Builds the Encoder, Decoder, and self-attention in detail, with a worked example covering the full input, training, and prediction pipeline. Useful for learning and understanding self-attention and the Transformer.

Language: Python - Size: 4.55 MB - Last synced: 2 months ago - Pushed: 2 months ago - Stars: 19 - Forks: 3

Xiefeng69/SEFNet

[SEKE2022] Inter- and Intra-Series Embeddings Fusion Network for Epidemiological Forecasting (SEFNet)

Language: Python - Size: 128 KB - Last synced: about 1 month ago - Pushed: over 1 year ago - Stars: 8 - Forks: 1

jw9730/tokengt

[NeurIPS'22] Tokenized Graph Transformer (TokenGT), in PyTorch

Language: Python - Size: 1.23 MB - Last synced: 3 months ago - Pushed: about 1 year ago - Stars: 293 - Forks: 42

akanimax/fagan

A variant of the Self-Attention GAN, named FAGAN (Full Attention GAN)

Language: Python - Size: 146 MB - Last synced: about 1 month ago - Pushed: over 5 years ago - Stars: 112 - Forks: 31

bhattbhavesh91/self-attention-python

This repository will guide you through implementing a simple self-attention mechanism using Python's NumPy library

Language: Jupyter Notebook - Size: 163 KB - Last synced: 27 days ago - Pushed: over 1 year ago - Stars: 3 - Forks: 1
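
Not copied from the repo above: a minimal sketch of the scaled dot-product self-attention that such NumPy tutorials typically build, with learned query/key/value projections. The function name and shapes are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (T, d).

    Wq, Wk: (d, dk) query/key projections; Wv: (d, dv) value projection.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (T, T) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (T, dv) weighted values
```

A quick sanity check: with a zero query projection, all scores are equal, the softmax becomes uniform, and every output row is the mean of the projected values.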

cosbidev/MATNet

Multi-Level Fusion and Self-Attention Transformer-Based Model for Multivariate Multi-Step Day-Ahead PV Generation Forecasting

Language: Python - Size: 82.3 MB - Last synced: 3 months ago - Pushed: 3 months ago - Stars: 1 - Forks: 0

gordicaleksa/pytorch-GAT

My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!

Language: Jupyter Notebook - Size: 25.2 MB - Last synced: 3 months ago - Pushed: over 1 year ago - Stars: 2,232 - Forks: 310

aliasgharkhani/SLiMe

1-shot image segmentation using Stable Diffusion

Language: Python - Size: 31.1 MB - Last synced: 3 months ago - Pushed: 3 months ago - Stars: 95 - Forks: 8

kaushalshetty/Structured-Self-Attention

A Structured Self-attentive Sentence Embedding

Language: Python - Size: 492 KB - Last synced: about 1 month ago - Pushed: over 4 years ago - Stars: 493 - Forks: 109
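
Several entries in this listing implement this paper; not taken from any of them is the following NumPy sketch of its multi-hop attention, A = softmax(Ws2 tanh(Ws1 H^T)), which turns BiLSTM hidden states into an r-row sentence embedding matrix. The function name and shapes are illustrative assumptions.

```python
import numpy as np

def structured_self_attention(H, Ws1, Ws2):
    """Sentence embedding via r attention hops (sketch of Lin et al., 2017).

    H: (T, 2u) BiLSTM hidden states; Ws1: (da, 2u); Ws2: (r, da).
    Returns M: (r, 2u), one attended summary of the sentence per hop.
    """
    S = Ws2 @ np.tanh(Ws1 @ H.T)                 # (r, T) unnormalized scores
    S -= S.max(axis=1, keepdims=True)            # numerical stability
    A = np.exp(S)
    A /= A.sum(axis=1, keepdims=True)            # softmax over time steps, per hop
    return A @ H                                 # (r, 2u) sentence matrix M
```

With Ws2 = 0 every hop attends uniformly, so each row of M is the mean hidden state; the paper adds a penalty term to push the hops apart instead.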

AmayaGS/MUSTANG

Multi-stain graph self attention multiple instance learning for histopathology Whole Slide Images - BMVC 2023

Language: Python - Size: 2.58 MB - Last synced: 3 months ago - Pushed: 3 months ago - Stars: 11 - Forks: 2

brightmart/bert_language_understanding

Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN

Language: Python - Size: 16 MB - Last synced: 3 months ago - Pushed: over 5 years ago - Stars: 959 - Forks: 212

cbaziotis/neat-vision

Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks (framework-agnostic).

Language: Vue - Size: 25.4 MB - Last synced: 2 months ago - Pushed: about 6 years ago - Stars: 250 - Forks: 26

tiongsikng/gc2sa_net

Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics

Language: Jupyter Notebook - Size: 13.6 MB - Last synced: 3 months ago - Pushed: 3 months ago - Stars: 2 - Forks: 1

hrithickcodes/transformer-tf

This repository contains the code for the paper "Attention Is All You Need", i.e. the Transformer.

Language: Jupyter Notebook - Size: 30.4 MB - Last synced: about 2 months ago - Pushed: over 1 year ago - Stars: 6 - Forks: 1

Namkwangwoon/Saliency-Attention-based-DETR

SA-DETR: Saliency Attention-based DETR for Salient Object Detection

Language: Python - Size: 324 KB - Last synced: about 1 month ago - Pushed: 4 months ago - Stars: 1 - Forks: 0

jw9730/hot

[NeurIPS'21] Higher-order Transformers for sets, graphs, and hypergraphs, in PyTorch

Language: Python - Size: 1.95 MB - Last synced: 3 months ago - Pushed: over 1 year ago - Stars: 57 - Forks: 6

VSainteuf/lightweight-temporal-attention-pytorch

A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series classification

Language: Python - Size: 935 KB - Last synced: 4 months ago - Pushed: almost 4 years ago - Stars: 78 - Forks: 17

genifyai/banking-reco-profile-icaif22

Code for ICAIF '22 paper "Sequential Banking Products Recommendation and User Profiling in One Go"

Language: Python - Size: 14.9 MB - Last synced: about 1 month ago - Pushed: over 1 year ago - Stars: 2 - Forks: 0

thinhcse/stock-price-forecasting

Example of stock price forecasting for S&P500 index (Pytorch, data crawling with Selenium)

Language: Python - Size: 1.31 MB - Last synced: 4 months ago - Pushed: 4 months ago - Stars: 0 - Forks: 0

PetropoulakisPanagiotis/gpt-practice

GPT code - I completed Andrej Karpathy's tutorial on building GPT components ("Let's build GPT: from scratch, in code, spelled out")

Language: Jupyter Notebook - Size: 430 KB - Last synced: 4 months ago - Pushed: 8 months ago - Stars: 2 - Forks: 0

Audio-WestlakeU/FS-EEND

The official PyTorch implementation of "Frame-wise streaming end-to-end speaker diarization with non-autoregressive self-attention-based attractors". [ICASSP 2024]

Language: Python - Size: 423 KB - Last synced: 4 months ago - Pushed: 4 months ago - Stars: 50 - Forks: 3

yaokui2018/SentimentAnalysis

Chinese sentiment classification | Text sentiment analysis based on three-class classification

Language: Python - Size: 69.8 MB - Last synced: 4 months ago - Pushed: 7 months ago - Stars: 3 - Forks: 1

ucalyptus/DARecNet-BS

[IEEE GRSL] - DARecNet-BS: Unsupervised Dual Attention Reconstruction Network for Hyperspectral Band Selection

Language: Jupyter Notebook - Size: 4.01 MB - Last synced: 4 months ago - Pushed: almost 3 years ago - Stars: 38 - Forks: 2

XinyuanLiao/AttnPINN-for-RUL-Estimation

A Framework for Remaining Useful Life Prediction Based on Self-Attention and Physics-Informed Neural Networks

Language: Python - Size: 5.11 MB - Last synced: 4 months ago - Pushed: 4 months ago - Stars: 39 - Forks: 7

Crush0416/KDNets

Multiple domain characteristics joint learning of SAR targets for SAR-ATR

Language: Python - Size: 24.4 KB - Last synced: 4 months ago - Pushed: 6 months ago - Stars: 2 - Forks: 0

richardRadli/pedestrian_detection

Implementation of different object detection neural networks, in order to detect pedestrians.

Language: Python - Size: 315 KB - Last synced: 4 months ago - Pushed: 4 months ago - Stars: 0 - Forks: 0

dcarpintero/transformer101

Annotated vanilla implementation in PyTorch of the Transformer model introduced in 'Attention Is All You Need'

Language: Jupyter Notebook - Size: 215 KB - Last synced: 3 months ago - Pushed: 3 months ago - Stars: 1 - Forks: 0

sambitbhaumik/siamese-nn-sts

Project files contain PyTorch implementations for Siamese BiLSTM models for Semantic Text Similarity task on the SICK Dataset using FastText embeddings. Also contains Siamese BiLSTM-Transformer Encoder and SBERT fine-tuning implementations on the STS Data tasks.

Language: Jupyter Notebook - Size: 24.2 MB - Last synced: 5 months ago - Pushed: almost 2 years ago - Stars: 2 - Forks: 1

johnjaejunlee95/SA_ConvLSTM

Language: Python - Size: 16.2 MB - Last synced: 5 months ago - Pushed: 5 months ago - Stars: 0 - Forks: 0

awsaf49/gcvit-tf

TensorFlow 2.0 implementation of GCViT: Global Context Vision Transformer

Language: Jupyter Notebook - Size: 27.6 MB - Last synced: 30 days ago - Pushed: 5 months ago - Stars: 21 - Forks: 6

Vaishnvi/ILLUME

To miss-attend is to misalign! Residual Self-Attentive Feature Alignment for Adapting Object Detectors, WACV 2022

Language: Python - Size: 613 KB - Last synced: 5 months ago - Pushed: over 1 year ago - Stars: 12 - Forks: 0

HEMANGANI/Music-Generation-Using-WGAN-GP-and-Self-Attention-Mechanism

Developed a music generation deep learning model using WGAN-GP and self-attention, aimed at creating melodic compositions.

Language: Jupyter Notebook - Size: 21.3 MB - Last synced: 4 months ago - Pushed: 5 months ago - Stars: 2 - Forks: 0

nguyenvo09/Double-Attention-Network

This is the PyTorch implementation of Double Attention Network, NIPS 2018

Language: Python - Size: 189 KB - Last synced: 5 months ago - Pushed: over 4 years ago - Stars: 29 - Forks: 4

ubisoft/ubisoft-laforge-daft-exprt

PyTorch Implementation of Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis

Language: Python - Size: 1.44 MB - Last synced: 5 months ago - Pushed: about 1 year ago - Stars: 111 - Forks: 25

partarstu/transformers-in-java

Experimental project for AI and NLP based on Transformer Architecture

Language: Java - Size: 443 KB - Last synced: 5 months ago - Pushed: 5 months ago - Stars: 7 - Forks: 1

Related Keywords
self-attention (261), pytorch (100), deep-learning (78), transformer (69), attention-mechanism (44), attention (39), nlp (34), transformers (30), machine-learning (25), tensorflow (25), python (25), computer-vision (20), cnn (14), vision-transformer (13), natural-language-processing (12), time-series (12), bert (12), image-classification (11), attention-is-all-you-need (11), multihead-attention (10), generative-adversarial-network (10), text-classification (10), python3 (10), rnn (9), language-model (9), object-detection (8), semantic-segmentation (8), self-attentive-rnn (8), neural-networks (8), keras (8), gan (8), sentiment-analysis (8), transformer-architecture (8), pytorch-implementation (7), deep-neural-networks (7), lstm (7), forecasting (6), transfer-learning (6), question-answering (6), segmentation (6), encoder-decoder (6), reinforcement-learning (5), self-supervised-learning (5), neural-machine-translation (5), transformer-models (5), multi-head-attention (5), sentence-embeddings (5), numpy (4), unsupervised-learning (4), multiple-instance-learning (4), sentiment-classification (4), gnn (4), imagenet (4), bilstm (4), backbone (4), convolutional-neural-networks (4), representation-learning (4), classification (4), pre-trained-model (4), artificial-intelligence (4), neural-network (4), graph-attention-networks (4), vit (3), llm (3), image-segmentation (3), cross-attention (3), mobilevit (3), remote-sensing (3), word-embeddings (3), positional-encoding (3), recurrent-neural-networks (3), ai (3), huggingface (3), image-recognition (3), domain-adaptation (3), vision-transformers (3), ade20k (3), coco (3), vae (3), speech-synthesis (3), visual-recognition (3), bidaf (3), text-to-speech (3), tts (3), roberta (3), natural-language-understanding (3), deeplearning (3), seq2seq (3), data-science (3), nlp-machine-learning (3), sagan (3), translation (3), multi-modal (3), attention-model (3), embeddings (3), detr (3), ml (3), tensorflow2 (3), awesome-list (3), attention-mechanisms (3)