An open API service providing repository metadata for many open source software ecosystems.

Topic: "positional-encoding"

lucidrains/rotary-embedding-torch

Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch

Language: Python - Size: 102 KB - Last synced at: 18 days ago - Pushed at: 6 months ago - Stars: 677 - Forks: 57
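A minimal sketch of the rotary-embedding (RoPE) idea the entry above implements. This is written in plain NumPy for self-containment (an assumption; the repository itself is PyTorch): consecutive channel pairs are rotated by position-dependent angles, so dot products between rotated vectors depend only on relative offsets.

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, dim).

    Channel pairs (2i, 2i+1) are rotated by angle pos * theta_i, where
    theta_i = base**(-2i/dim). Rotation preserves vector norms, and the
    angle difference between two positions depends only on their offset.
    """
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-np.arange(half) * 2.0 / dim)         # (half,)
    angles = np.arange(seq_len)[:, None] * inv_freq[None, :]  # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

In a transformer this rotation is applied to queries and keys before the attention dot product; position 0 is left unchanged because all its rotation angles are zero.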

vijaydwivedi75/gnn-lspe

Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022

Language: Python - Size: 267 KB - Last synced at: 2 months ago - Pushed at: over 3 years ago - Stars: 253 - Forks: 36

yiqun-wang/PET-NeuS

PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces (CVPR 2023)

Language: Python - Size: 3.41 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 249 - Forks: 5

universome/inr-gan

[CVPR 2021] Adversarial Generation of Continuous Images

Language: Python - Size: 1.5 MB - Last synced at: over 1 year ago - Pushed at: over 3 years ago - Stars: 228 - Forks: 23

ziplab/FASeg

[CVPR 2023] This is the official PyTorch implementation for "Dynamic Focus-aware Positional Queries for Semantic Segmentation".

Language: Python - Size: 426 KB - Last synced at: about 1 year ago - Pushed at: over 2 years ago - Stars: 54 - Forks: 2

willGuimont/learnable_fourier_positional_encoding

Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding

Language: Python - Size: 6.84 KB - Last synced at: 28 days ago - Pushed at: 8 months ago - Stars: 48 - Forks: 9
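A rough sketch of the Fourier-feature encoding behind the entry above, in NumPy. In the paper's learnable variant the projection matrix W is trained; here W is a random Gaussian matrix (an assumption), which reduces to the classic random-Fourier-features baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(pos, n_features=16, scale=1.0, W=None):
    """Map M-dimensional positions to features [cos(pos @ W.T), sin(pos @ W.T)].

    W has shape (n_features, M). Dividing by sqrt(n_features) gives each
    output row unit norm, since cos^2 + sin^2 sums to n_features per row.
    """
    pos = np.atleast_2d(pos)                   # (n, M)
    if W is None:
        # Random projection stands in for the learned one (assumption).
        W = rng.normal(scale=scale, size=(n_features, pos.shape[1]))
    proj = pos @ W.T                           # (n, n_features)
    feats = np.concatenate([np.cos(proj), np.sin(proj)], axis=1)
    return feats / np.sqrt(n_features)
```

The `scale` of W controls the frequency bandwidth: larger scales let the encoding distinguish nearby positions at the cost of smoothness.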

gcambara/cape

Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch

Language: Python - Size: 59.6 KB - Last synced at: 2 days ago - Pushed at: over 2 years ago - Stars: 40 - Forks: 3

VITA-Group/Ms-PoE

"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" by Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.

Language: Python - Size: 7.53 MB - Last synced at: 3 days ago - Pushed at: about 1 year ago - Stars: 29 - Forks: 3

osiriszjq/complex_encoding

Trading Positional Complexity vs Deepness in Coordinate Networks

Language: Jupyter Notebook - Size: 47.6 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 28 - Forks: 0

axiomlab/Cable

Context-aware Biases for Length Extrapolation

Language: Python - Size: 721 KB - Last synced at: 15 days ago - Pushed at: 15 days ago - Stars: 21 - Forks: 8

HySonLab/Multires-Graph-Transformer

Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Long-Range and Hierarchical Structures

Language: Python - Size: 7.22 MB - Last synced at: about 1 year ago - Pushed at: over 1 year ago - Stars: 12 - Forks: 3

AryaAftab/rotary-embedding-tensorflow

Implementation of Rotary Embeddings, from the RoFormer paper, in TensorFlow

Language: Python - Size: 6.84 KB - Last synced at: 16 days ago - Pushed at: over 3 years ago - Stars: 12 - Forks: 2

JHLew/Learnable-Fourier-Features

Unofficial pytorch implementation of the paper "Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding", NeurIPS 2021.

Language: Python - Size: 5.86 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 10 - Forks: 0

VITA-Group/TAPE

[Preprint] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi Cai, Jason D. Lee, Pan Li, Zhangyang Wang

Language: Python - Size: 16.6 MB - Last synced at: 2 days ago - Pushed at: 2 days ago - Stars: 9 - Forks: 0

liaoyanqing666/transformer_pytorch

A complete implementation of the original Transformer

Language: Python - Size: 17.6 KB - Last synced at: 2 months ago - Pushed at: 3 months ago - Stars: 9 - Forks: 0

osiriszjq/RobustPPE

Robust Point Cloud Processing through Positional Embedding

Language: HTML - Size: 37 MB - Last synced at: over 1 year ago - Pushed at: almost 2 years ago - Stars: 7 - Forks: 0

ETH-DISCO/Benchmarking-PEs

Benchmarking Positional Encodings for GNNs and Graph Transformers

Language: Python - Size: 35.9 MB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 6 - Forks: 1

harveybc/feature-extractor

Application for training an autoencoder whose encoder serves as a feature extractor for dimensionality and noise reduction, while the decoder can be used for synthetic data generation. Supports dynamic plugin integration, allowing users to extend it with custom encoder and decoder models.

Language: Python - Size: 182 MB - Last synced at: 14 days ago - Pushed at: 15 days ago - Stars: 5 - Forks: 0

PKU-ML/LaplacianCanonization

Official code for NeurIPS 2023 paper "Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding".

Language: Python - Size: 88.9 KB - Last synced at: about 1 year ago - Pushed at: over 1 year ago - Stars: 5 - Forks: 1

xmarva/transformer-architectures

Teaching transformer-based architectures

Language: Jupyter Notebook - Size: 242 KB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 4 - Forks: 0

konstantinosKokos/ape

Algebraic Positional Encodings

Language: Python - Size: 265 KB - Last synced at: 6 months ago - Pushed at: 6 months ago - Stars: 4 - Forks: 0

Ave-Sergeev/Tictonix

Crate for `Embeddings` and `Positional Encoding` (Rust), 2025

Language: Rust - Size: 1.77 MB - Last synced at: 18 days ago - Pushed at: 18 days ago - Stars: 3 - Forks: 0

imics-lab/positional-encoding-benchmark

This repository offers a comprehensive overview and quantitative benchmarking of positional encoding methods in transformer-based time series models.

Language: Jupyter Notebook - Size: 4.9 MB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 3 - Forks: 0

bkhanal-11/transformers

The implementation of transformer as presented in the paper "Attention is all you need" from scratch.

Language: Python - Size: 291 KB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 3 - Forks: 0

Breeze648/Transformer-from-Scratch

This repository is positioned as an AI paper reproduction: a Transformer implemented from scratch. The code follows the module structure of the original paper, covering all components including positional encoding, multi-head attention, the feed-forward network, and the encoder-decoder, with detailed Chinese walkthrough documents and English comments to support learning and further development.

Language: Python - Size: 1.17 MB - Last synced at: 7 days ago - Pushed at: about 1 month ago - Stars: 2 - Forks: 1

mpalaourg/PGL-SUM Fork of e-apostolidis/PGL-SUM

A PyTorch Implementation of PGL-SUM from "Combining Global and Local Attention with Positional Encoding for Video Summarization", Proc. IEEE ISM 2021

Language: Python - Size: 89.6 MB - Last synced at: almost 2 years ago - Pushed at: almost 3 years ago - Stars: 2 - Forks: 1

Dhanush-R-git/MH-Analysis

MHRoberta is a Mental Health RoBERTa model: a pretrained RoBERTa transformer fine-tuned on a mental health dataset using a PEFT method.

Language: Jupyter Notebook - Size: 3.67 MB - Last synced at: 24 days ago - Pushed at: 24 days ago - Stars: 1 - Forks: 0

devrahulbanjara/Transformers-from-Scratch

A repository implementing Transformers from scratch using PyTorch, designed to build a deeper understanding of their architecture by coding core components step-by-step.

Language: Jupyter Notebook - Size: 423 KB - Last synced at: 6 months ago - Pushed at: 6 months ago - Stars: 1 - Forks: 0

jhaayush2004/My-Transformer

Code implementation of Transformer Model in "Attention is All You Need" in PyTorch.

Language: Jupyter Notebook - Size: 5.25 MB - Last synced at: about 2 months ago - Pushed at: about 1 year ago - Stars: 1 - Forks: 0

dcarpintero/transformer101

Annotated vanilla implementation in PyTorch of the Transformer model introduced in 'Attention Is All You Need'.

Language: Jupyter Notebook - Size: 215 KB - Last synced at: 6 days ago - Pushed at: over 1 year ago - Stars: 1 - Forks: 0

GeorgeMLP/laplacian-canonization

Official code for NeurIPS 2023 paper "Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding".

Language: Python - Size: 88.9 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 1 - Forks: 0

SpydazWebAI-NLP/BasicCorpus2023

A basic corpus object providing positional encoding/decoding. A fully loaded corpus: Corpus > Document > Sentences > Clauses > Words

Language: Visual Basic .NET - Size: 2.63 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 1 - Forks: 1

Aalto-QuML/PIPE

Positional Encoding meets Persistent Homology on Graphs

Language: Python - Size: 29.7 MB - Last synced at: 3 days ago - Pushed at: 3 days ago - Stars: 0 - Forks: 0

mikeendarson/Deepdive-llama3-from-scratch

Implements Llama 3 inference step by step, explaining the core concepts and deriving the full process alongside the code.

Language: Jupyter Notebook - Size: 16.6 MB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0

iamdebasishdas123/Building-LLM-From-Scratch

Language: Jupyter Notebook - Size: 38.1 KB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 0 - Forks: 0

KhushiRajurkar/Vision-Transformer-Image-Classification

A Vision Transformer (ViT) implementation for image classification using CIFAR-10 dataset, leveraging HuggingFace's Trainer API for computational efficiency

Language: Jupyter Notebook - Size: 33.2 KB - Last synced at: 2 months ago - Pushed at: 5 months ago - Stars: 0 - Forks: 0

surafiel-habib/Transformer-Based-Amharic-to-English-Machine-Translation-with-Character-Embedding-and-Combined-Regul

Language: Jupyter Notebook - Size: 9.04 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 0 - Forks: 0

SrinithiSaiprasath/Transformer 📦

Transformer-based chatbot following "Attention Is All You Need"

Language: Jupyter Notebook - Size: 10.1 MB - Last synced at: 9 months ago - Pushed at: 9 months ago - Stars: 0 - Forks: 0

areeba0/English-to-French-Translation-using-NLTK-and-Hugging-Face-Transformers-MarianMTModel

This repository provides a complete workflow for text processing using Hugging Face Transformers and NLTK. It includes modules for sentence normalization, spelling correction, word embedding generation, positional encoding computation, and English-to-French translation

Language: Jupyter Notebook - Size: 8.79 KB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 0

Khsaadi/built_basic_transformer

This repo contains a basic transformer implementation

Language: Jupyter Notebook - Size: 4.88 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 0 - Forks: 0

omkar-nitsure/Human_Vs_ChatGPT

A Transformer-based approach to distinguishing ChatGPT-generated text from human text. The model was deployed on a local server using Flask, with Docker managing all dependencies.

Language: Python - Size: 159 MB - Last synced at: 9 months ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

peluche/self-attention

Comparison of positional encoding schemes in transformer

Language: Jupyter Notebook - Size: 27.5 MB - Last synced at: about 1 year ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

GeorgeMLP/basis-invariance-synthetic-experiment

Basis invariance synthetic experiment in Appendix D of NeurIPS 2023 paper "Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding".

Language: Python - Size: 5.86 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

tanjeffreyz/attention-is-all-you-need

PyTorch implementation of "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin

Language: Python - Size: 770 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 0

SpydazWebAI-NLP/BasicPositionalEncoderDecoder2023

The Positional Encoder Decoder is a Visual Basic .NET class that provides functionality for encoding and decoding tokens and sentences using positional embeddings. It converts string tokens to their corresponding embeddings and back.

Language: Visual Basic .NET - Size: 25.4 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 1

TillBeemelmanns/learnable_fourier_positional_encoding

Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding - Tensorflow

Language: Jupyter Notebook - Size: 22.5 KB - Last synced at: 3 months ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 0

tate8/translator

Transformer translator website with multithreaded web server in Rust

Language: Rust - Size: 19.5 MB - Last synced at: about 2 years ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0

antonio-f/PositionalEncoding

Positional encoding example

Language: Python - Size: 1.95 KB - Last synced at: 2 months ago - Pushed at: over 3 years ago - Stars: 0 - Forks: 0
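For reference alongside the example repos above, the fixed sinusoidal encoding from "Attention Is All You Need" can be sketched in a few lines of NumPy:

```python
import numpy as np

def sinusoidal_encoding(seq_len, dim, base=10000.0):
    """Fixed sinusoidal positional encodings.

    PE[pos, 2i]   = sin(pos / base**(2i/dim))
    PE[pos, 2i+1] = cos(pos / base**(2i/dim))
    Each channel pair oscillates at a different geometric frequency,
    giving every position a unique, bounded fingerprint.
    """
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(0, dim, 2)[None, :]          # even channel indices
    angles = pos / base ** (i / dim)           # (seq_len, dim // 2)
    pe = np.zeros((seq_len, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```

The encoding is added to token embeddings before the first attention layer; row 0 is the fixed pattern (0, 1, 0, 1, …) since sin(0) = 0 and cos(0) = 1.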

JudePark96/positional-encoding

Language: Python - Size: 0 Bytes - Last synced at: over 2 years ago - Pushed at: over 5 years ago - Stars: 0 - Forks: 0

Related Topics
transformer (15), deep-learning (9), attention-mechanism (9), self-attention (8), pytorch (8), transformers (6), python (5), gnn (5), machine-learning (4), rope (4), attention-is-all-you-need (4), nlp (4), encoder-decoder (4), tensorflow (3), large-language-models (3), graph-neural-networks (3), laplacian-eigenmaps (3), multihead-attention (3), feedforward-neural-network (3), graph-transformer (3), spectral-embedding (3), vision-transformer (2), alibi (2), encoder-decoder-architecture (2), keras (2), computer-vision (2), rust (2), multi-head-attention (2), nlp-machine-learning (2), large-language-model (2), huggingface-transformers (2), encoder-decoder-model (2), transfomer (2), deeplearning (2), natural-language-processing (2), representation-learning (2), llm (2), gpt (2), attention (2), lrgb (1), geometric-deep-learning (1), gnn-lspe (1), graph-deep-learning (1), graph-representation-learning (1), graphs (1), lspe (1), message-passing (1), molecules (1), transformer-networks (1), roberta-model (1), corpus-tools (1), inference (1), kv-cache (1), language-model (1), llama (1), llm-configuration (1), llms (1), mask (1), rms (1), rotary-position-encoding (1), beginner (1), cifar-10 (1), data-augmentation (1), huggingface (1), image-classification (1), model-evaluation (1), neural-networks (1), patch-encoding (1), trainer-api (1), transfer-learning (1), expressiveness (1), gnn-benchmark (1), roberta-tokenizer (1), long-context (1), lost-in-the-middle (1), embeddings (1), math (1), artificial-intelligence (1), tensorflow2 (1), synthetic-data (1), auto-regressive-model (1), cable (1), context-aware (1), length-extrapolation (1), ann (1), autoencoder (1), cnn (1), cnn-keras (1), dimensionality-reduction (1), reusable-components (1), lstm (1), plugin-system (1), pretraining (1), pretrained-models (1), pretrained-weights (1), cross-attention (1), transformer-architecture (1), deep-neural-networks (1), persistent-homology-graphs (1), dot-product-attention (1)