GitHub topics: neural-ranking
mrjleo/fast-forward-indexes
Efficient interpolation-based ranking on CPUs
Language: Python - Size: 2.72 MB - Last synced at: 2 days ago - Pushed at: 2 days ago - Stars: 11 - Forks: 8
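
A minimal sketch of the interpolation idea named in the description (illustrative only, not the fast-forward-indexes API): the final score of each candidate is a convex combination of a lexical first-stage score and a dense semantic score.

```python
# Interpolation-based re-ranking sketch (hypothetical names, not the
# library's API): score = alpha * lexical + (1 - alpha) * semantic.
from typing import Dict


def interpolate(
    lexical: Dict[str, float],   # doc_id -> first-stage (e.g. BM25) score
    semantic: Dict[str, float],  # doc_id -> dense dot-product score
    alpha: float = 0.5,          # interpolation weight (assumed default)
) -> Dict[str, float]:
    """Score each first-stage candidate by interpolating both signals."""
    return {
        doc_id: alpha * lexical[doc_id] + (1.0 - alpha) * semantic.get(doc_id, 0.0)
        for doc_id in lexical
    }


# Usage: re-rank a BM25 candidate list with pre-computed dense scores.
scores = interpolate({"d1": 12.3, "d2": 9.8}, {"d1": 0.71, "d2": 0.84}, alpha=0.7)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Because the dense scores can be pre-computed and looked up, the interpolation step itself needs no GPU, which is the point of doing this on CPUs.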

sebastian-hofstaetter/matchmaker
Training & evaluation library for text-based neural re-ranking and dense retrieval models built with PyTorch
Language: Python - Size: 10 MB - Last synced at: 3 months ago - Pushed at: about 2 years ago - Stars: 261 - Forks: 29
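
For orientation, a generic sketch of how such re-ranking models are commonly trained (not matchmaker's actual API): a pairwise margin loss pushes the score of a relevant passage above that of a non-relevant one.

```python
# Generic pairwise re-ranking training step in PyTorch (illustrative;
# the scorer, feature shapes, and hyperparameters are assumptions).
import torch

model = torch.nn.Sequential(torch.nn.Linear(768, 1))  # stand-in scorer
loss_fn = torch.nn.MarginRankingLoss(margin=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# q_pos / q_neg: pre-encoded (query, passage) pair features, shape (batch, 768)
q_pos, q_neg = torch.randn(32, 768), torch.randn(32, 768)
pos_score = model(q_pos).squeeze(-1)
neg_score = model(q_neg).squeeze(-1)

# target of 1 means pos_score should be ranked above neg_score
loss = loss_fn(pos_score, neg_score, torch.ones_like(pos_score))
loss.backward()
optimizer.step()
```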

grill-lab/CODEC
CODEC is a document and entity ranking dataset that focuses on complex essay-style topics.
Language: Shell - Size: 17.2 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 16 - Forks: 1

jingtaozhan/RepCONC
WSDM'22 Best Paper: Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval
Language: Python - Size: 479 KB - Last synced at: over 1 year ago - Pushed at: almost 3 years ago - Stars: 114 - Forks: 11
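
A simplified illustration of the discrete-representation idea (product quantization of dense embeddings); the actual method additionally trains the codebooks end-to-end under a balanced, constrained cluster-assignment objective, which this sketch omits.

```python
# Quantize dense document embeddings into per-subspace centroid ids
# (simplified sketch; names and shapes are assumptions).
import numpy as np


def quantize(embs: np.ndarray, codebooks: np.ndarray) -> np.ndarray:
    """embs: (n, d) embeddings; codebooks: (m, k, d // m) centroids.

    Returns (n, m) integer codes, one centroid id per subspace.
    """
    n, d = embs.shape
    m, k, sub = codebooks.shape
    parts = embs.reshape(n, m, sub)  # split each embedding into m sub-vectors
    # squared distance from every sub-vector to every centroid: (n, m, k)
    dists = ((parts[:, :, None, :] - codebooks[None]) ** 2).sum(-1)
    return dists.argmin(-1)          # nearest centroid per subspace


rng = np.random.default_rng(0)
codes = quantize(rng.normal(size=(100, 64)), rng.normal(size=(8, 256, 8)))
```

Storing m small integers per document instead of a float vector is what makes this kind of dense retrieval both effective and efficient.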

canjiali/PARADE
Code and data to facilitate BERT/ELECTRA for document ranking. For details, refer to the paper PARADE: Passage Representation Aggregation for Document Reranking.
Language: Python - Size: 127 KB - Last synced at: over 1 year ago - Pushed at: about 2 years ago - Stars: 95 - Forks: 10
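
A hedged sketch of the core PARADE idea: encode a long document's passages separately, then aggregate the passage representations into one document representation before scoring. This shows simple max-pooling; the paper also studies mean, attention, and transformer-based aggregators. Names here are illustrative.

```python
# Max-pooling aggregation over per-passage [CLS] vectors (sketch).
import torch


def parade_max(passage_cls: torch.Tensor, scorer: torch.nn.Linear) -> torch.Tensor:
    """passage_cls: (num_passages, hidden) per-passage [CLS] vectors."""
    doc_repr = passage_cls.max(dim=0).values  # element-wise max over passages
    return scorer(doc_repr)                   # single relevance score


scorer = torch.nn.Linear(768, 1)              # hidden size of 768 assumed
score = parade_max(torch.randn(8, 768), scorer)
```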

sebastian-hofstaetter/tripclick
Establishing Strong Baselines for TripClick Health Retrieval (ECIR 2022)
Size: 7.81 KB - Last synced at: 3 months ago - Pushed at: over 3 years ago - Stars: 5 - Forks: 0

jingtaozhan/disentangled-retriever
An easy-to-use Python toolkit for flexibly adapting various neural ranking models to any target domain.
Language: Python - Size: 906 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 30 - Forks: 3

grill-lab/DL-Hard
Deep Learning Hard (DL-HARD) is a new annotated dataset extending the TREC Deep Learning benchmark.
Size: 17.3 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 29 - Forks: 4

chengxuanying/WSDM-Adhoc-Document-Retrieval
This is our solution for WSDM - DiggSci 2020. We implemented a simple yet robust search pipeline which ranked 2nd on the validation set and 4th on the test set. We won the gold prize in the innovation track and the bronze prize in the dataset track.
Language: Jupyter Notebook - Size: 53.7 KB - Last synced at: about 2 years ago - Pushed at: almost 5 years ago - Stars: 59 - Forks: 12

lavis-nlp/CoRT
Code repository of the NAACL'21 paper "CoRT: Complementary Rankings from Transformers"
Language: Python - Size: 32.2 KB - Last synced at: about 2 years ago - Pushed at: almost 4 years ago - Stars: 11 - Forks: 0
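
A sketch of merging two "complementary" ranked lists, in the spirit of CoRT, which pairs a BM25 ranking with a transformer-based one. This shows a simple alternating interleave with de-duplication; check the paper and repo for the exact merging strategy.

```python
# Interleave two runs, keeping the first occurrence of each doc id
# (illustrative merging strategy, not necessarily CoRT's exact one).
from itertools import chain, zip_longest
from typing import List


def interleave(run_a: List[str], run_b: List[str]) -> List[str]:
    """Alternate doc ids from both runs, de-duplicating as we go."""
    merged, seen = [], set()
    for doc_id in chain.from_iterable(zip_longest(run_a, run_b)):
        if doc_id is not None and doc_id not in seen:
            seen.add(doc_id)
            merged.append(doc_id)
    return merged


print(interleave(["d1", "d2", "d3"], ["d9", "d2", "d7"]))
# -> ['d1', 'd9', 'd2', 'd3', 'd7']
```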
